配置名 默认值 英文描述 中文描述
zookeeper.connect Zookeeper host string Zookeeper主机字符串
advertised.host.name null DEPRECATED: only used when `advertised.listeners` or `listeners` are not set. Use `advertised.listeners` instead. Hostname to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, it will use the value for `host.name` if configured. Otherwise it will use the value returned from java.net.InetAddress.getCanonicalHostName(). 已弃用:仅当未设置`advertised.listeners`或`listeners`时使用,请改用`advertised.listeners`。发布到ZooKeeper供客户端使用的主机名。在IaaS环境中,这可能需要与broker绑定的网络接口不同。如果未设置,将使用`host.name`的值(如果已配置),否则使用java.net.InetAddress.getCanonicalHostName()的返回值。
advertised.listeners null Listeners to publish to ZooKeeper for clients to use, if different than the listeners above. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for `listeners` will be used. 发布到ZooKeeper的客户端使用的侦听器,如果不同于上面的侦听器。 在IaaS环境中,这可能需要与代理绑定的接口不同。 如果没有设置,将使用`listeners`的值。
advertised.port null DEPRECATED: only used when `advertised.listeners` or `listeners` are not set. Use `advertised.listeners` instead. The port to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the port to which the broker binds. If this is not set, it will publish the same port that the broker binds to. 已弃用:仅当未设置`advertised.listeners`或`listeners`时使用,请改用`advertised.listeners`。发布到ZooKeeper供客户端使用的端口。在IaaS环境中,这可能需要与broker绑定的端口不同。如果未设置,将发布broker所绑定的同一端口。
auto.create.topics.enable TRUE Enable auto creation of topic on the server 启用在服务器上自动创建topic
auto.leader.rebalance.enable TRUE Enables auto leader balancing. A background thread checks and triggers leader balance if required at regular intervals 启用自动leader平衡。后台线程会定期检查,并在需要时触发leader平衡
background.threads 10 The number of threads to use for various background processing tasks 用于各种后台处理任务的线程数
broker.id -1 The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between zookeeper generated broker ids and user configured broker ids, generated broker ids start from reserved.broker.max.id + 1. 此服务器的broker id。如果未设置,将自动生成一个唯一的broker id。为避免zookeeper自动生成的broker id与用户配置的broker id冲突,自动生成的broker id从reserved.broker.max.id + 1开始。
compression.type producer Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. 指定给定topic的最终压缩类型。 此配置接受标准压缩编解码器('gzip','snappy','lz4')。 它还接受“未压缩”,这相当于没有压缩; 和 'producer' ,意味着保留由producer设置的原始压缩编解码器。
delete.topic.enable FALSE Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off 启用删除topic。 如果此配置已关闭,通过管理工具删除topic将没有任何效果
host.name "" DEPRECATED: only used when `listeners` is not set. Use `listeners` instead. hostname of broker. If this is set, it will only bind to this address. If this is not set, it will bind to all interfaces 已弃用:仅在未设置`listeners`时使用,请改用`listeners`。broker的主机名。如果设置,broker将只绑定到此地址;如果未设置,将绑定到所有网络接口
leader.imbalance.check.interval.seconds 300 The frequency with which the partition rebalance check is triggered by the controller 控制器触发分区重新平衡检查的频率
leader.imbalance.per.broker.percentage 10 The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per broker. The value is specified in percentage. 每个broker允许的leader不平衡比例。如果某个broker超过此比例,控制器将触发leader平衡。该值以百分比表示。
listeners null Listener List - Comma-separated list of URIs we will listen on and their protocols.
 Specify hostname as 0.0.0.0 to bind to all interfaces.
 Leave hostname empty to bind to default interface.
 Examples of legal listener lists:
 PLAINTEXT://myhost:9092,TRACE://:9091
 PLAINTEXT://0.0.0.0:9092, TRACE://localhost:9093
 监听器列表 - 以逗号分隔的、我们将监听的URI及其协议的列表。将主机名指定为0.0.0.0可绑定到所有网络接口;主机名留空则绑定到默认接口。合法的监听器列表示例:PLAINTEXT://myhost:9092,TRACE://:9091 以及 PLAINTEXT://0.0.0.0:9092, TRACE://localhost:9093
log.dir /tmp/kafka-logs The directory in which the log data is kept (supplemental for log.dirs property) 保存日志数据的目录(对log.dirs属性的补充)
log.dirs null The directories in which the log data is kept. If not set, the value in log.dir is used 保存日志数据的目录。 如果未设置,则使用log.dir中的值
log.flush.interval.messages 9223372036854775807 The number of messages accumulated on a log partition before messages are flushed to disk  消息刷新到磁盘之前在日志分区上累积的消息数
log.flush.interval.ms null The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used 任何topic中的消息在刷新到磁盘之前保存在内存中的最大时间(以毫秒为单位)。 如果未设置,则使用log.flush.scheduler.interval.ms中的值
log.flush.offset.checkpoint.interval.ms 60000 The frequency with which we update the persistent record of the last flush which acts as the log recovery point 更新最后一次刷新的持久化记录(作为日志恢复点)的频率
log.flush.scheduler.interval.ms 9223372036854775807 The frequency in ms that the log flusher checks whether any log needs to be flushed to disk 日志刷新器检查是否有任何日志需要刷新到磁盘的频率(以毫秒为单位)
log.retention.bytes -1 The maximum size of the log before deleting it 删除日志之前的日志的最大大小
log.retention.hours 168 The number of hours to keep a log file before deleting it (in hours), tertiary to log.retention.ms property 删除日志文件之前保留它的小时数(以小时为单位),优先级第三,低于log.retention.ms属性
log.retention.minutes null The number of minutes to keep a log file before deleting it (in minutes), secondary to log.retention.ms property. If not set, the value in log.retention.hours is used 在删除日志文件之前保持日志文件的分钟数(以分钟为单位),次于log.retention.ms属性。 如果未设置,则使用log.retention.hours中的值
log.retention.ms null The number of milliseconds to keep a log file before deleting it (in milliseconds), If not set, the value in log.retention.minutes is used 在删除日志文件之前保留日志文件的毫秒数(以毫秒为单位),如果未设置,则使用log.retention.minutes中的值
log.roll.hours 168 The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property 新日志段推出之前的最长时间(以小时为单位),次于log.roll.ms属性
log.roll.jitter.hours 0 The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property 从logRollTimeMillis中减去的最大抖动(以小时为单位),优先级低于log.roll.jitter.ms属性
log.roll.jitter.ms null The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used 从logRollTimeMillis中减去的最大浮动(以毫秒为单位)。 如果未设置,则使用log.roll.jitter.hours中的值
log.roll.ms null The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used 新日志段推出之前的最长时间(以毫秒为单位)。 如果未设置,则使用log.roll.hours中的值
log.segment.bytes 1073741824 The maximum size of a single log file 单个日志文件的最大大小
log.segment.delete.delay.ms 60000 The amount of time to wait before deleting a file from the filesystem 从文件系统中删除文件之前等待的时间
message.max.bytes 1000012 The maximum size of message that the server can receive 服务器可以接收的消息的最大大小
min.insync.replicas 1 When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write. 当生产者将acks设置为“all”(或“-1”)时,min.insync.replicas指定必须确认写入的副本的最小数量,以使写入被认为成功。 如果这个最小值不能满足,那么生产者将引发一个异常(NotEnoughReplicas或NotEnoughReplicasAfterAppend)。当一起使用时,min.insync.replicas和ack允许你强制更强的耐久性保证。 典型的情况是创建一个复制因子为3的主题,将min.insync.replicas设置为2,并产生一个“all”的acks。 这将确保生成器在大多数副本没有接收到写入时引发异常。
num.io.threads 8 The number of io threads that the server uses for carrying out network requests 服务器用于执行网络请求的io线程数
num.network.threads 3 the number of network threads that the server uses for handling network requests 服务器用于处理网络请求的网络线程数
num.recovery.threads.per.data.dir 1 The number of threads per data directory to be used for log recovery at startup and flushing at shutdown 每个数据目录的线程数,用于在启动时进行日志恢复并在关闭时刷新
num.replica.fetchers 1 Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker. 用于从源broker复制消息的提取线程数。 增加此值可以提高跟随器broker中的I / O并行度。
offset.metadata.max.bytes 4096 The maximum size for a metadata entry associated with an offset commit 与offset提交关联的元数据条目的最大大小
offsets.commit.required.acks -1 The required acks before the commit can be accepted. In general, the default (-1) should not be overridden 可以接受提交之前所需的acks。 通常,不应覆盖默认值(-1)
offsets.commit.timeout.ms 5000 Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout. 偏移提交将被延迟,直到偏移主题的所有副本都收到提交或达到此超时。 这类似于生产者请求超时。
offsets.load.buffer.size 5242880 Batch size for reading from the offsets segments when loading offsets into the cache. 用于在将偏移量装入缓存时从偏移段读取的批量大小。
offsets.retention.check.interval.ms 600000 Frequency at which to check for stale offsets 检查旧偏移的频率
offsets.retention.minutes 1440 Log retention window in minutes for offsets topic 偏移topic的日志保留时间(分钟)
offsets.topic.compression.codec 0 Compression codec for the offsets topic - compression may be used to achieve "atomic" commits 用于偏移topic的压缩编解码器 - 压缩可以用于实现“原子”提交
offsets.topic.num.partitions 50 The number of partitions for the offset commit topic (should not change after deployment) 偏移提交topic的分区数(部署后不应更改)
offsets.topic.replication.factor 3 The replication factor for the offsets topic (set higher to ensure availability). To ensure that the effective replication factor of the offsets topic is the configured value, the number of alive brokers has to be at least the replication factor at the time of the first request for the offsets topic. If not, either the offsets topic creation will fail or it will get a replication factor of min(alive brokers, configured replication factor) 偏移topic的副本因子(设置得更高以确保可用性)。为保证偏移topic的实际副本因子等于配置值,在第一次请求偏移topic时,存活的broker数量必须不少于该副本因子;否则,偏移topic的创建要么失败,要么其副本因子为min(存活broker数, 配置的副本因子)
offsets.topic.segment.bytes 104857600 The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads 偏移主题段字节应保持相对较小,以便于加快日志压缩和缓存加载
port 9092 DEPRECATED: only used when `listeners` is not set. Use `listeners` instead. the port to listen and accept connections on 已弃用:仅在未设置`listeners`时使用,请改用`listeners`。用于监听和接受连接的端口
queued.max.requests 500 The number of queued requests allowed before blocking the network threads 在阻止网络线程之前允许的排队请求数
quota.consumer.default 9223372036854775807 DEPRECATED: Used only when dynamic default quotas are not configured for  or  in Zookeeper. Any consumer distinguished by clientId/consumer group will get throttled if it fetches more bytes than this value per-second 已弃用:仅当未在Zookeeper中配置动态默认配额时使用。由clientId/消费者组区分的任何消费者,如果每秒获取的字节数超过此值,将被限流
quota.producer.default 9223372036854775807 DEPRECATED: Used only when dynamic default quotas are not configured for ,  or  in Zookeeper. Any producer distinguished by clientId will get throttled if it produces more bytes than this value per-second 已弃用:仅当未在Zookeeper中配置动态默认配额时使用。由clientId区分的任何生产者,如果每秒产生的字节数超过此值,将被限流
replica.fetch.min.bytes 1 Minimum bytes expected for each fetch response. If not enough bytes, wait up to replicaMaxWaitTimeMs 每个获取响应所需的最小字节数。 如果没有足够的字节,请等待到replicaMaxWaitTimeMs
replica.fetch.wait.max.ms 500 max wait time for each fetcher request issued by follower replicas. This value should always be less than the replica.lag.time.max.ms at all times to prevent frequent shrinking of ISR for low throughput topics 由跟随者副本发出的每个获取器请求的最大等待时间。 此值应始终小于replica.lag.time.max.ms以防止低吞吐量topic的ISR频繁收缩
replica.high.watermark.checkpoint.interval.ms 5000 The frequency with which the high watermark is saved out to disk  high watermark保存到磁盘的频率
replica.lag.time.max.ms 10000 If a follower hasn't sent any fetch requests or hasn't consumed up to the leaders log end offset for at least this time, the leader will remove the follower from isr 如果follower在至少这段时间内没有发送任何fetch请求,或者没有消费到leader日志的末尾偏移,leader会将该follower从ISR中移除
replica.socket.receive.buffer.bytes 65536 The socket receive buffer for network requests 用于网络请求的套接字接收缓冲区
replica.socket.timeout.ms 30000 The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms 网络请求的套接字超时。 其值应至少为replica.fetch.wait.max.ms
request.timeout.ms 30000 The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. 配置控制客户端等待请求响应的最大时间量。 如果在超时时间之前没有收到响应,客户端将在必要时重新发送请求,如果重试次数耗尽,则请求失败。
socket.receive.buffer.bytes 102400 The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. 套接字服务器(socket server)套接字的SO_RCVBUF缓冲区。如果值为-1,将使用操作系统默认值。
socket.request.max.bytes 104857600 The maximum number of bytes in a socket request 套接字请求中的最大字节数
socket.send.buffer.bytes 102400 The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. 套接字服务器(socket server)套接字的SO_SNDBUF缓冲区。如果值为-1,将使用操作系统默认值。
unclean.leader.election.enable TRUE Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss 指示是否允许不在ISR集合中的副本在万不得已时被选为leader,即使这样做可能导致数据丢失
zookeeper.connection.timeout.ms null The max time that the client waits to establish a connection to zookeeper. If not set, the value in zookeeper.session.timeout.ms is used 客户端等待与zookeeper建立连接的最长时间。 如果未设置,则使用zookeeper.session.timeout.ms中的值
zookeeper.session.timeout.ms 6000 Zookeeper session timeout Zookeeper会话超时
zookeeper.set.acl FALSE Set client to use secure ACLs 设置客户端使用安全ACLS
broker.id.generation.enable TRUE Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed. 在服务器上启用自动broker id生成。 启用时,应检查为reserved.broker.max.id配置的值。
broker.rack null Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: `RACK1`, `us-east-1d` 代理的机架。 这将在机架感知复制分配中用于容错。 示例:`RACK1`,`us-east-1d`
connections.max.idle.ms 600000 Idle connections timeout: the server socket processor threads close the connections that idle more than this 空闲连接超时:服务器socket处理器线程会关闭空闲时间超过此值的连接
controlled.shutdown.enable TRUE Enable controlled shutdown of the server 启用服务器的受控关闭
controlled.shutdown.max.retries 3 Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens 受控关机可能由于多种原因而失败。 这将确定发生此类故障时的重试次数
controlled.shutdown.retry.backoff.ms 5000 Before each retry, the system needs time to recover from the state that caused the previous failure (Controller fail over, replica lag etc). This config determines the amount of time to wait before retrying. 在每次重试之前,系统需要时间从导致先前故障的状态(控制器故障切换,副本滞后等)恢复。 此配置确定重试之前等待的时间量。
controller.socket.timeout.ms 30000 The socket timeout for controller-to-broker channels 控制器到代理通道的套接字超时时间
default.replication.factor 1 default replication factors for automatically created topics 自动创建的topic的默认副本因子
fetch.purgatory.purge.interval.requests 1000 The purge interval (in number of requests) of the fetch request purgatory fetch请求purgatory的清除间隔(以请求数计)
group.max.session.timeout.ms 300000 The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures. 已注册消费者允许的最大会话超时时间。更长的超时让消费者在两次心跳之间有更多时间处理消息,但代价是需要更长时间才能检测到故障。
group.min.session.timeout.ms 6000 The minimum allowed session timeout for registered consumers. Shorter timeouts lead to quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources. 已注册消费者允许的最小会话超时时间。更短的超时可以更快地检测到故障,代价是更频繁的消费者心跳,这可能会压垮broker资源。
inter.broker.protocol.version 0.10.1-IV2 Specify which version of the inter-broker protocol will be used.
 This is typically bumped after all brokers were upgraded to a new version.
 Example of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1 Check ApiVersion for the full list.
指定将使用哪个版本的代理间协议。
  通常在所有broker都升级到新版本之后才提升该值。
  一些有效值的示例:0.8.0,0.8.1,0.8.1.1,0.8.2,0.8.2.0,0.8.2.1,0.9.0.0,0.9.0.1。完整列表请查看ApiVersion。
log.cleaner.backoff.ms 15000 The amount of time to sleep when there are no logs to clean 当没有日志要清理时,睡眠的时间量
log.cleaner.dedupe.buffer.size 134217728 The total memory used for log deduplication across all cleaner threads 用于所有清除程序线程的日志重复数据删除的总内存
log.cleaner.delete.retention.ms 86400000 How long are delete records retained? 删除的记录保留多长时间?
log.cleaner.enable TRUE Enable the log cleaner process to run on the server? Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size. 启用日志清理器进程在服务器上运行? 如果使用任何包含cleanup.policy = compact的主题包括内部偏移主题,应该启用。 如果禁用,那些主题将不会被压缩并且尺寸不断增大。
log.cleaner.io.buffer.load.factor 0.9 Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions 日志清除器重复数据删除缓冲区负载因子。 重复数据删除缓冲区已满的百分比。 较高的值将允许同时清除更多的日志,但会导致更多的哈希冲突
log.cleaner.io.buffer.size 524288 The total memory used for log cleaner I/O buffers across all cleaner threads 所有清除程序线程的日志清除器I/O缓冲区所使用的总内存
log.cleaner.io.max.bytes.per.second 1.7976931348623157E308 The log cleaner will be throttled so that the sum of its read and write i/o will be less than this value on average 日志清除器将被限流,使其读写I/O之和平均小于此值
log.cleaner.min.cleanable.ratio 0.5 The minimum ratio of dirty log to total log for a log to eligible for cleaning 脏日志占总日志的最小比率,达到该比率的日志才符合清理条件
log.cleaner.min.compaction.lag.ms 0 The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. 消息在日志中保持未压缩状态的最短时间。仅适用于正在压缩的日志。
log.cleaner.threads 1 The number of background threads to use for log cleaning 用于日志清理的后台线程数
log.cleanup.policy [delete] The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are: "delete" and "compact" 超出保留时间段的段的默认清除策略。 逗号分隔的有效策略列表。 有效的策略是:“delete”和“compact”
log.index.interval.bytes 4096 The interval with which we add an entry to the offset index 我们向偏移索引添加条目的间隔
log.index.size.max.bytes 10485760 The maximum size in bytes of the offset index 偏移索引的最大大小(以字节为单位)
log.message.format.version 0.10.1-IV2 Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller or equal than the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand. 指定broker用于将消息追加到日志的消息格式版本。该值应为有效的ApiVersion。一些示例是:0.8.2,0.9.0.0,0.10.0,详情请查看ApiVersion。通过设置特定的消息格式版本,用户证明磁盘上所有现有消息的版本都小于或等于指定的版本。不正确地设置此值将导致使用旧版本的消费者出错,因为他们会收到无法理解的格式的消息。
log.message.timestamp.difference.max.ms 9223372036854775807 The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. 代理接收消息时的时间戳和消息中指定的时间戳之间允许的最大差异。如果log.message.timestamp.type=CreateTime,当时间戳的差异超过此阈值时,该消息将被拒绝。如果log.message.timestamp.type=LogAppendTime,则忽略此配置。
log.message.timestamp.type CreateTime Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime` 定义消息中的时间戳是消息创建时间还是日志追加时间。该值应为`CreateTime`或`LogAppendTime`
log.preallocate FALSE Should pre allocate file when create new segment? If you are using Kafka on Windows, you probably need to set it to true. 应该在创建新段时预分配文件? 如果您在Windows上使用Kafka,则可能需要将其设置为true。
log.retention.check.interval.ms 300000 The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion 日志清除程序检查任何日志是否有资格删除的频率(以毫秒为单位)
max.connections.per.ip 2147483647 The maximum number of connections we allow from each ip address 我们从每个IP地址允许的最大连接数
max.connections.per.ip.overrides "" Per-ip or hostname overrides to the default maximum number of connections per-ip或hostname覆盖默认最大连接数
num.partitions 1 The default number of log partitions per topic 每个topic的默认日志分区数
principal.builder.class class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder The fully qualified name of a class that implements the PrincipalBuilder interface, which is currently used to build the Principal for connections with the SSL SecurityProtocol. 实现Principal Builder接口的类的完全限定名,该接口当前用于构建与SSL安全协议的连接的Principal。
producer.purgatory.purge.interval.requests 1000 The purge interval (in number of requests) of the producer request purgatory 生产者请求purgatory的清除间隔(请求数)
replica.fetch.backoff.ms 1000 The amount of time to sleep when fetch partition error occurs. 发生抓取分区错误时休眠的时间。
replica.fetch.max.bytes 1048576 The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that progress can be made. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). 尝试为每个分区提取的消息的字节数。这不是绝对最大值,如果提取的第一个非空分区中的第一条消息大于此值,仍会返回该消息以确保能够取得进展。代理接受的最大消息大小通过message.max.bytes(broker config)或max.message.bytes(topic config)定义。
replica.fetch.response.max.bytes 10485760 Maximum bytes expected for the entire fetch response. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that progress can be made. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). 整个获取响应所需的最大字节数。这不是绝对最大值,如果提取的第一个非空分区中的第一条消息大于此值,仍会返回该消息以确保能够取得进展。代理接受的最大消息大小通过message.max.bytes(broker config)或max.message.bytes(topic config)定义。
reserved.broker.max.id 1000 Max number that can be used for a broker.id 可用于broker.id的最大数量
sasl.enabled.mechanisms [GSSAPI] The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default. Kafka服务器中启用的SASL机制列表。 该列表可以包含安全提供者可用的任何机制。 默认情况下仅启用GSSAPI。
sasl.kerberos.kinit.cmd /usr/bin/kinit Kerberos kinit command path. Kerberos kinit命令路径。
sasl.kerberos.min.time.before.relogin 60000 Login thread sleep time between refresh attempts. 登录线程在刷新尝试之间的休眠时间。
sasl.kerberos.principal.to.local.rules [DEFAULT] A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see  security authorization and acls. 用于从主体名称到短名称(通常是操作系统用户名)的映射规则列表。 按顺序评估规则,并且使用与主体名称匹配的第一个规则将其映射到短名称。 将忽略列表中的任何后续规则。 默认情况下,{username} / {hostname} @ {REALM}形式的主体名称映射到{username}。 有关格式的详细信息,请参阅安全授权和acls。
sasl.kerberos.service.name null The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. Kafka运行的Kerberos主体名称。 这可以在Kafka的JAAS配置或Kafka的配置中定义。
sasl.kerberos.ticket.renew.jitter 0.05 Percentage of random jitter added to the renewal time. 随机抖动的百分比添加到更新时间。
sasl.kerberos.ticket.renew.window.factor 0.8 Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. 登录线程将休眠,直到达到从上次刷新到票据到期时间的指定窗口因子,此时它将尝试更新票据。
sasl.mechanism.inter.broker.protocol GSSAPI SASL mechanism used for inter-broker communication. Default is GSSAPI. SASL机制用于代理间通信。 默认为GSSAPI。
security.inter.broker.protocol PLAINTEXT Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. 用于在代理之间通信的安全协议。有效值为:PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL。
ssl.cipher.suites null A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. 密码套件列表。 这是一种命名的认证,加密,MAC和密钥交换算法的组合,用于使用TLS或SSL网络协议协商网络连接的安全设置。 默认情况下,支持所有可用的密码套件。
ssl.client.auth none Configures kafka broker to request client authentication. The following settings are common: ssl.client.auth=required If set to required, client authentication is required. ssl.client.auth=requested This means client authentication is optional; unlike required, if this option is set the client can choose not to provide authentication information about itself. ssl.client.auth=none This means client authentication is not needed. 配置kafka代理以请求客户端认证。以下设置很常见:ssl.client.auth=required 如果设置为required,则要求客户端身份验证。ssl.client.auth=requested 表示客户端认证是可选的;与required不同,设置此选项时客户端可以选择不提供自身的身份验证信息。ssl.client.auth=none 表示不需要客户端身份验证。
ssl.enabled.protocols [TLSv1.2, TLSv1.1, TLSv1] The list of protocols enabled for SSL connections. 为SSL连接启用的协议列表。
ssl.key.password null The password of the private key in the key store file. This is optional for client. 密钥库文件中私钥的密码。这对于客户端是可选的。
ssl.keymanager.algorithm SunX509 The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. 密钥管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的密钥管理器工厂算法。
ssl.keystore.location null The location of the key store file. This is optional for client and can be used for two-way authentication for client. 密钥存储文件的位置。 这对于客户端是可选的,并且可以用于客户端的双向认证。
ssl.keystore.password null The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured.  密钥存储文件的存储密码。 这对于客户端是可选的,只有在配置了ssl.keystore.location时才需要。
ssl.keystore.type JKS The file format of the key store file. This is optional for client. 密钥存储文件的文件格式。 这对于客户端是可选的。
ssl.protocol TLS The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. 用于生成SSLContext的SSL协议。 默认设置为TLS,这在大多数情况下是正常的。 最近的JVM中允许的值为TLS,TLSv1.1和TLSv1.2。 旧的JVM可能支持SSL,SSLv2和SSLv3,但是由于已知的安全漏洞,它们的使用不受欢迎。
ssl.provider null The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. 用于SSL连接的安全提供程序的名称。 默认值是JVM的默认安全提供程序。
ssl.trustmanager.algorithm PKIX The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. 信任管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的信任管理器工厂算法。
ssl.truststore.location null The location of the trust store file.  信任存储文件的位置。
ssl.truststore.password null The password for the trust store file.  信任存储文件的密码。
ssl.truststore.type JKS The file format of the trust store file. 信任存储文件的文件格式。
authorizer.class.name "" The authorizer class that should be used for authorization 应该用于授权的授权程序类
metric.reporters [] A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. 用作度量报告器的类的列表。 实现MetricReporter接口允许插入将通知新度量标准创建的类。 总是包括JmxReporter以注册JMX统计信息。
metrics.num.samples 2 The number of samples maintained to compute metrics. 维持计算度量的样本数。
metrics.sample.window.ms 30000 The window of time a metrics sample is computed over. 计算度量样本的时间窗口。
quota.window.num 11 The number of samples to retain in memory for client quotas 要在客户端配额的内存中保留的样本数
quota.window.size.seconds 1 The time span of each sample for client quotas 客户端配额的每个样本的时间跨度
replication.quota.window.num 11 The number of samples to retain in memory for replication quotas 要在复制配额的内存中保留的样本数
replication.quota.window.size.seconds 1 The time span of each sample for replication quotas 复制配额的每个样本的时间跨度
ssl.endpoint.identification.algorithm null The endpoint identification algorithm to validate server hostname using server certificate.  端点标识算法,使用服务器证书验证服务器主机名。
ssl.secure.random.implementation null The SecureRandom PRNG implementation to use for SSL cryptography operations.  用于SSL加密操作的SecureRandom PRNG实现。
zookeeper.sync.time.ms 2000 How far a ZK follower can be behind a ZK leader ZK follower可以落后于ZK leader多远
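
The entries above are broker-level settings and normally live in the broker's server.properties file; the rows that follow are the per-topic overrides. As a purely illustrative sketch (the chosen values, and the idea of assembling them in Java rather than in a properties file, are assumptions, not an official template), a handful of the keys documented above can be collected into a java.util.Properties object, which is also the form Kafka parses them into:

import java.util.Properties;

public class BrokerConfigSketch {
    public static void main(String[] args) {
        // Hypothetical broker settings; the keys match the table above,
        // the values are illustrative only.
        Properties broker = new Properties();
        broker.put("broker.id", "0");                         // unique id, or -1 to auto-generate
        broker.put("listeners", "PLAINTEXT://0.0.0.0:9092");  // bind to all interfaces
        broker.put("zookeeper.connect", "localhost:2181");    // Zookeeper host string
        broker.put("log.dirs", "/var/lib/kafka-logs");        // where log segments are stored
        broker.put("num.network.threads", "3");
        broker.put("num.io.threads", "8");
        broker.put("log.retention.hours", "168");             // keep data for 7 days
        broker.put("min.insync.replicas", "2");               // pair with acks=all on the producer
        broker.list(System.out);                              // print the effective key/value pairs
    }
}
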
cleanup.policy [delete] A string that is either "delete" or "compact". This string designates the retention policy to use on old log segments. The default policy ("delete") will discard old segments when their retention time or size limit has been reached. The "compact" setting will enable log compaction on the topic. 取值为"delete"或"compact"的字符串。此字符串指定在旧日志段上使用的保留策略。默认策略("delete")会在达到保留时间或大小限制时丢弃旧的段。"compact"设置将对topic启用日志压缩。
compression.type producer Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', lz4). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. 指定给定主题的最终压缩类型。 此配置接受标准压缩编解码器('gzip','snappy',lz4)。 它还接受“未压缩”,这相当于没有压缩; 和“生成器”,意味着保留由生产者设置的原始压缩编解码器。
delete.retention.ms 86400000 The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan). 为日志压缩topic保留删除墓碑标记(delete tombstone)的时间。此设置还限定了消费者从偏移0开始读取时必须完成读取的时间,以确保它们获得最终阶段的有效快照(否则删除墓碑可能在其完成扫描之前就被回收)。
file.delete.delay.ms 60000 The time to wait before deleting a file from the filesystem 从文件系统中删除文件之前等待的时间
flush.messages 9223372036854775807 This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see the per-topic configuration section). 此设置允许指定一个间隔,达到该间隔时我们将强制对写入日志的数据执行fsync。例如,如果设置为1,我们将在每条消息后fsync;如果设置为5,我们将在每5条消息后fsync。一般来说,我们建议不要设置此值,而是使用复制来保证持久性,并依赖操作系统的后台刷新能力,因为这样更高效。此设置可以按topic覆盖(请参阅每topic配置部分)。
flush.ms 9223372036854775807 This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. 此设置允许指定一个时间间隔,达到该间隔时我们将强制对写入日志的数据执行fsync。例如,如果设置为1000,我们将在1000毫秒过去后fsync。一般来说,我们建议不要设置此值,而是使用复制来保证持久性,并依赖操作系统的后台刷新能力,因为这样更高效。
follower.replication.throttled.replicas [] A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic. 在follower端应该限制日志复制的副本列表。 该列表应该以[PartitionId]的形式描述一组副本:[BrokerId],[PartitionId]:[BrokerId]:...或者通配符“*”可以用于限制此主题的所有副本。
index.interval.bytes 4096 This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this. 此设置控制Kafka向其偏移索引添加索引条目的频率。默认设置确保我们大约每4096个字节对消息进行索引。更多索引允许读取更接近日志中的确切位置,但会使索引更大。你可能不需要改变这个。
leader.replication.throttled.replicas [] A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic. 在leader端应该限制日志复制的副本列表。 该列表应该以[PartitionId]的形式描述一组副本:[BrokerId],[PartitionId]:[BrokerId]:...或者通配符“*”可以用于限制此主题的所有副本。
max.message.bytes 1000012 This is largest message size Kafka will allow to be appended. Note that if you increase this size you must also increase your consumer's fetch size so they can fetch messages this large. 这是Kafka允许附加的最大message大小。 请注意,如果您增加此大小,您还必须增加消费者抓取大小,以便他们可以抓取这么大的message。
message.format.version 0.10.1-IV2 Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller or equal than the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand. 指定broker用于将消息追加到日志的消息格式版本。该值应为有效的ApiVersion。一些示例是:0.8.2,0.9.0.0,0.10.0,详情请查看ApiVersion。通过设置特定的消息格式版本,用户证明磁盘上所有现有消息的版本都小于或等于指定的版本。不正确地设置此值将导致使用旧版本的消费者出错,因为他们会收到无法理解的格式的消息。
message.timestamp.difference.max.ms 9223372036854775807 The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if message.timestamp.type=LogAppendTime. 代理接收消息时的时间戳和消息中指定的时间戳之间允许的最大差异。如果message.timestamp.type=CreateTime,当时间戳的差异超过此阈值时,该消息将被拒绝。如果message.timestamp.type=LogAppendTime,则忽略此配置。
message.timestamp.type CreateTime Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime` 定义消息中的时间戳是消息创建时间还是日志追加时间。该值应为`CreateTime`或`LogAppendTime`
min.cleanable.dirty.ratio 0.5 This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50% at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log. 此配置控制日志压缩程序尝试清理日志的频率(假设启用了日志压缩)。默认情况下,我们将避免清理已有超过50%被压缩的日志。此比率限制了日志中被重复项浪费的最大空间(在50%时,最多50%的日志可能是重复项)。较高的比率意味着更少但更高效的清理,但也意味着日志中浪费的空间更多。
min.compaction.lag.ms 0 The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. 消息在日志中保持未压缩状态的最短时间。仅适用于正在压缩的日志。
min.insync.replicas 1 When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write. 当生产者将ack设置为“all”(或“-1”)时,min.insync.replicas指定必须确认写入的副本的最小数量,以使写入被认为成功。 如果这个最小值不能满足,那么生产者将引发一个异常(NotEnoughReplicas或NotEnoughReplicasAfterAppend)。当一起使用时,min.insync.replicas和ack允许你强制更强的耐久性保证。 典型的情况是创建一个复制因子为3的主题,将min.insync.replicas设置为2,并产生一个“all”的acks。 这将确保生成器在大多数副本没有接收到写入时引发异常。
preallocate FALSE Should pre allocate file when create new segment? 应该在创建新段时预分配文件?
retention.bytes -1 This configuration controls the maximum size a log can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit only a time limit. 在使用"delete"保留策略时,此配置控制日志在我们丢弃旧日志段以释放空间之前可以增长到的最大大小。默认情况下没有大小限制,只有时间限制。
retention.ms 604800000 This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. 在使用"delete"保留策略时,此配置控制我们在丢弃旧日志段以释放空间之前保留日志的最长时间。这代表了消费者必须在多长时间内读取其数据的SLA。
segment.bytes 1073741824 This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention. 此配置控制日志的段文件大小。 保留和清除总是一次执行一个文件,因此较大的段大小意味着更少的文件,但对保留的粒度控制较少。
segment.index.bytes 10485760 This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting. 此配置控制将偏移映射到文件位置的索引的大小。 我们预分配此索引文件,并收缩日志滚动后。 您通常不需要更改此设置。
segment.jitter.ms 0 The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling 从计划的段滚动时间中减去的最大随机抖动,以避免段滚动的惊群效应
segment.ms 604800000 This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data. 此配置控制Kafka将强制日志滚动(即使分段文件未满)的时间段,以确保保留可以删除或压缩旧数据。
unclean.leader.election.enable TRUE Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss 指示是否启用不在ISR集中的副本作为最后手段被选为leader,即使这样做可能会导致数据丢失
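
Per-topic overrides such as the ones above are normally supplied when a topic is created or altered (for example via kafka-topics.sh --config in the 0.10.x tooling this table documents). The sketch below instead uses the Java AdminClient, which only exists in newer kafka-clients releases (roughly 0.11 and later), so treat it as an assumption-laden illustration rather than the API of the version described here; the broker address, topic name, and the chosen values are hypothetical.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical cluster address

        // Per-topic overrides of the defaults documented above.
        Map<String, String> configs = new HashMap<>();
        configs.put("cleanup.policy", "compact");
        configs.put("min.insync.replicas", "2");
        configs.put("retention.ms", "86400000");

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 3, plus the overrides above.
            NewTopic topic = new NewTopic("example-topic", 3, (short) 3).configs(configs);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
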
bootstrap.servers A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). 用于建立与Kafka集群的初始连接的主机/端口对的列表。 客户端将使用所有服务器,而不考虑在此指定哪些服务器进行引导 - 此列表仅影响用于发现全套服务器的初始主机。 此列表应采用host1:port1,host2:port2,...的形式。由于这些服务器仅用于初始连接以发现完整的集群成员资格(可能会动态更改),所以此列表不需要包含完整集 的服务器(您可能想要多个服务器,以防万一服务器关闭)。
key.serializer Serializer class for key that implements the Serializer interface. 实现Serializer接口的密钥的Serializer类。
value.serializer Serializer class for value that implements the Serializer interface. 用于实现Serializer接口的值的Serializer类。
acks 1 The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the  durability of records that are sent. The following settings are allowed:   acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1. acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting. 生产者需要领导者在考虑请求完成之前已经接收的确认的数目。这控制发送的记录的持久性。允许以下设置:acks = 0如果设置为零,则生产者将不会等待来自服务器的任何确认。该记录将立即添加到套接字缓冲区并考虑发送。在这种情况下,不能保证服务器已经接收到记录,并且重试配置将不会生效(因为客户端通常不知道任何故障)。每个记录返回的偏移将始终设置为-1。 acks = 1这将意味着领导者将记录写入其本地日志,但将作出响应,而不等待所有追随者的完全确认。在这种情况下,如果领导者在确认记录之后立即失败,但在追随者复制它之前失败,则记录将丢失。 acks = all这意味着领导者将等待完整的同步副本集来确认记录。这保证只要至少一个同步中的副本保持活动,记录就不会丢失。这是最强的可用保证。这相当于acks = -1设置。
buffer.memory 33554432 The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms after which it will throw an exception.This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. 生产者可以用来缓冲等待发送到服务器的记录的内存的总字节数。 如果记录的发送速度比可以传递到服务器的速度快,那么生产者将阻塞max.block.ms,之后将抛出异常。这个设置应该大致对应于生产者将使用的总内存,但不是严格的 因为并不是生产者使用的所有内存都用于缓冲。 一些额外的内存将用于压缩(如果启用压缩)以及用于维护in-flight请求
compression.type none The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid  values are none, gzip, snappy, or lz4. Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression). 生产者生成的所有数据的压缩类型。 默认值为none(即不压缩)。 有效值为none,gzip,snappy或lz4。 压缩是完全批次的数据,因此批量化的效果也将影响压缩比(更多的批次意味着更好的压缩)。
retries 0 Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries without setting max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first. 设置大于零的值将导致客户端重新发送任何发送失败且可能存在临时错误的记录。 请注意,此重试与客户端在接收到错误时重新发送记录没有什么不同。 允许重试而不将max.in.flight.requests.per.connection设置为1将潜在地更改记录的顺序,因为如果两个批次发送到单个分区,并且第一个失败并重试,但第二个成功,则记录 在第二批可以先出现。
ssl.key.password null The password of the private key in the key store file. This is optional for client. 密钥存储文件中的私钥的密码。 这对于客户端是可选的。
ssl.keystore.location null The location of the key store file. This is optional for client and can be used for two-way authentication for client. 密钥存储文件的位置。 这对于客户端是可选的,并且可以用于客户端的双向认证。
ssl.keystore.password null The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured.  密钥存储文件的存储密码。 这对于客户端是可选的,只有在配置了ssl.keystore.location时才需要。
ssl.truststore.location null The location of the trust store file.  信任存储文件的位置。
ssl.truststore.password null The password for the trust store file.  信任存储文件的密码。
batch.size 16384 The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. 每当多个记录被发送到同一分区时,生产者将尝试将批记录一起分成较少的请求。 这有助于客户端和服务器上的性能。 此配置控制默认批量大小(以字节为单位)。 不会尝试大于此大小的批记录。 发送到代理的请求将包含多个批次,每个分区一个,可用于发送的数据。 小批量大小将使批处理不那么常见,并可能降低吞吐量(批量大小为零将完全禁用批处理)。 非常大的批量大小可能更浪费地使用存储器,因为我们将总是分配预定额外记录的指定批量大小的缓冲器。
client.id "" An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. 在发出请求时传递给服务器的id字符串。 这样做的目的是通过允许在服务器端请求记录中包括一个逻辑应用程序名称,能够跟踪超出ip / port的请求源。
connections.max.idle.ms 540000 Close idle connections after the number of milliseconds specified by this config. 在此配置指定的毫秒数后关闭空闲连接。
linger.ms 0 The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay—that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. 生产者会将在请求传输之间到达的记录归并到单个批量请求中。通常这只在负载下发生,即记录到达的速度快于其发送速度。然而在某些情况下,即使在中等负载下,客户端也可能希望减少请求的数量。该设置通过添加少量人为延迟来实现这一点:生产者不会立即发送记录,而是等待至多给定的延迟,以便其他记录可以一起发送,从而将发送批量化。这可以被认为类似于TCP中的Nagle算法。这个设置给出了批处理延迟的上限:一旦我们为某个分区累积了batch.size大小的记录,无论此设置如何,它都会被立即发送;但如果我们为该分区累积的字节数少于这个值,我们将"逗留(linger)"指定的时间,等待更多的记录到达。此设置默认为0(即没有延迟)。例如,设置linger.ms=5将减少发送的请求数量,但在无负载情况下发送的记录会增加至多5ms的延迟。
max.block.ms 60000 The configuration controls how long KafkaProducer.send() and KafkaProducer.partitionsFor() will block.These methods can be blocked either because the buffer is full or metadata unavailable.Blocking in the user-supplied serializers or partitioner will not be counted against this timeout. 配置控制KafkaProducer.send()和KafkaProducer.partitionsFor()将阻塞的时间。这些方法可能被阻止,因为缓冲区已满或元数据不可用。用户提供的序列化程序或分区程序中的阻塞将不会计入此超时 。
max.request.size 1048576 The maximum size of a request in bytes. This is also effectively a cap on the maximum record size. Note that the server has its own cap on record size which may be different from this. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. 请求的最大大小(以字节为单位)。 这也是有效的最大记录大小的上限。 请注意,服务器有自己的记录大小上限,可能与此不同。 此设置将限制生产者在单个请求中发送的记录批次数,以避免发送大量请求。
partitioner.class class org.apache.kafka.clients.producer.internals.DefaultPartitioner Partitioner class that implements the Partitioner interface. 实现分区器接口的分区器类。
receive.buffer.bytes 32768 The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. 读取数据时使用的TCP接收缓冲区大小(SO_RCVBUF)。 如果值为-1,将使用操作系统默认值。
request.timeout.ms 30000 The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. 配置控制客户端等待请求响应的最大时间量。如果在超时之前没有收到响应,客户端将在必要时重新发送请求,如果重试次数耗尽,则该请求失败。
sasl.kerberos.service.name null The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. Kafka运行的Kerberos主体名称。 这可以在Kafka的JAAS配置或Kafka的配置中定义。
sasl.mechanism GSSAPI SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. SASL机制用于客户端连接。 这可以是安全提供者可用的任何机制。 GSSAPI是默认机制。
security.protocol PLAINTEXT Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. 用于与代理通信的协议。 有效值为:PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL。
send.buffer.bytes 131072 The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. 发送数据时使用的TCP发送缓冲区(SO_SNDBUF)的大小。 如果值为-1,将使用操作系统默认值。
ssl.enabled.protocols [TLSv1.2, TLSv1.1, TLSv1] The list of protocols enabled for SSL connections. 为SSL连接启用的协议列表。
ssl.keystore.type JKS The file format of the key store file. This is optional for client. 密钥存储文件的文件格式。 这对于客户端是可选的。
ssl.protocol TLS The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. 用于生成SSLContext的SSL协议。 默认设置为TLS,这在大多数情况下是正常的。 最近的JVM中允许的值为TLS,TLSv1.1和TLSv1.2。 在较早的JVM中可能支持SSL,SSLv2和SSLv3,但是由于已知的安全漏洞,它们的使用不受欢迎。
ssl.provider null The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. 用于SSL连接的安全提供程序的名称。 默认值是JVM的默认安全提供程序。
ssl.truststore.type JKS The file format of the trust store file. 信任存储文件的文件格式。
timeout.ms 30000 The configuration controls the maximum amount of time the server will wait for acknowledgments from followers to meet the acknowledgment requirements the producer has specified with the acks configuration. If the requested number of acknowledgments are not met when the timeout elapses an error will be returned. This timeout is measured on the server side and does not include the network latency of the request. 配置控制服务器等待来自跟随者的确认以满足生产者使用ack配置指定的确认要求的最大时间量。 如果在超时过期时未满足所请求的确认数量,则将返回错误。 此超时在服务器端测量,不包括请求的网络延迟。
block.on.buffer.full FALSE When our memory buffer is exhausted we must either stop accepting new records (block) or throw errors. By default this setting is false and the producer will no longer throw a BufferExhaustException but instead will use the max.block.ms value to block, after which it will throw a TimeoutException. Setting this property to true will set the max.block.ms to Long.MAX_VALUE. Also if this property is set to true, parameter metadata.fetch.timeout.ms is no longer honored. This parameter is deprecated and will be removed in a future release. Parameter max.block.ms should be used instead. 当内存缓冲区耗尽时,我们必须要么停止接受新记录(阻塞),要么抛出错误。默认情况下,此设置为false,生产者不再抛出BufferExhaustException,而是使用max.block.ms值进行阻塞,超过该时间后抛出TimeoutException。将此属性设置为true会将max.block.ms设置为Long.MAX_VALUE。此外,如果此属性设置为true,则不再遵循参数metadata.fetch.timeout.ms。此参数已弃用,将在以后的版本中删除,应改用参数max.block.ms。
interceptor.classes null A list of classes to use as interceptors. Implementing the ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors. 要用作拦截器的类的列表。 实现ProducerInterceptor接口允许您在生成器接收到的记录发布到Kafka集群之前拦截(并且可能变异)记录。 默认情况下,没有拦截器。
max.in.flight.requests.per.connection 5 The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled). 客户端在阻止之前在单个连接上发送的未确认请求的最大数量。 请注意,如果此设置设置为大于1并且发送失败,则可能会由于重试(即启用重试)而导致消息重新排序。
metadata.fetch.timeout.ms 60000 The first time data is sent to a topic we must fetch metadata about that topic to know which servers host the topic's partitions. This config specifies the maximum time, in milliseconds, for this fetch to succeed before throwing an exception back to the client. 第一次将数据发送到主题时,我们必须获取有关该主题的元数据,以了解哪些服务器托管主题的分区。 此配置指定在将异常返回到客户端之前此次提取成功的最大时间(以毫秒为单位)。
metadata.max.age.ms 300000 The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. 即使我们没有看到任何分区领导更改以主动发现任何新的代理或分区,我们强制刷新元数据的时间(以毫秒为单位)。
metric.reporters [] A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. 用作度量报告器的类的列表。 实现MetricReporter接口允许插入将被通知新度量标准创建的类。 总是包括JmxReporter以注册JMX统计信息。
metrics.num.samples 2 The number of samples maintained to compute metrics. 维持计算度量的样本数。
metrics.sample.window.ms 30000 The window of time a metrics sample is computed over. 计算度量样本的时间窗口。
reconnect.backoff.ms 50 The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker. 尝试重新连接到给定主机之前等待的时间。 这避免了在紧密环路中重复连接到主机。 此回退适用于消费者发送给代理的所有请求。
retry.backoff.ms 100 The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. 尝试对指定主题分区重试失败的请求之前等待的时间。 这避免了在一些故障情况下在紧密循环中重复发送请求。
sasl.kerberos.kinit.cmd /usr/bin/kinit Kerberos kinit command path. Kerberos kinit命令路径。
sasl.kerberos.min.time.before.relogin 60000 Login thread sleep time between refresh attempts. 登录线程在刷新尝试之间的休眠时间。
sasl.kerberos.ticket.renew.jitter 0.05 Percentage of random jitter added to the renewal time. 添加到更新时间的随机抖动的百分比。
sasl.kerberos.ticket.renew.window.factor 0.8 Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. 登录线程将睡眠,直到从上次刷新到票据到期的时间的指定窗口因子已经到达,在该时间它将尝试更新票证。
ssl.cipher.suites null A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. 密码套件列表。 这是一种命名的认证,加密,MAC和密钥交换算法的组合,用于使用TLS或SSL网络协议协商网络连接的安全设置。默认情况下,支持所有可用的密码套件。
ssl.endpoint.identification.algorithm null The endpoint identification algorithm to validate server hostname using server certificate.  端点标识算法,使用服务器证书验证服务器主机名。
ssl.keymanager.algorithm SunX509 The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. 密钥管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的密钥管理器工厂算法。
ssl.secure.random.implementation null The SecureRandom PRNG implementation to use for SSL cryptography operations.  用于SSL加密操作的SecureRandom PRNG实现。
ssl.trustmanager.algorithm PKIX The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. 信任管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的信任管理器工厂算法。
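
For reference, here is a minimal sketch of how the producer settings above are passed to a KafkaProducer through a Properties object; the broker address, topic name, and the specific values are illustrative assumptions only, not recommended defaults.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // initial hosts used to discover the cluster
        props.put("acks", "all");                          // wait for the full ISR, see min.insync.replicas
        props.put("retries", "3");                         // retry transient send failures
        props.put("batch.size", "16384");                  // per-partition batch size in bytes
        props.put("linger.ms", "5");                       // small artificial delay to improve batching
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));
        }
    }
}
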
bootstrap.servers A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). 用于建立与Kafka集群的初始连接的主机/端口对的列表。 客户端将使用所有服务器,而不考虑在此指定哪些服务器进行引导 - 此列表仅影响用于发现全套服务器的初始主机。 此列表应采用host1:port1,host2:port2,...的形式。由于这些服务器仅用于初始连接以发现完整的集群成员资格(可能会动态更改),所以此列表不需要包含完整集的服务器(您可能想要多个服务器,以防万一服务器关闭)。
key.deserializer Deserializer class for key that implements the Deserializer interface. 用于实现Deserializer接口的键的Deserializer类。
value.deserializer Deserializer class for value that implements the Deserializer interface. 用于实现Deserializer接口的值的Deserializer类。
fetch.min.bytes 1 The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency. 服务器应该为获取请求返回的最小数据量。 如果数据不足,请求将在应答请求之前等待多少数据累积。 默认设置为1字节表示只要单个字节的数据可用或获取请求超时等待数据到达,就会应答获取请求。 将此值设置为大于1的值将导致服务器等待大量数据累积,这可以以一些额外延迟为代价提高服务器吞吐量。
group.id "" A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy. 标识此消费者所属的使用者组的唯一字符串。 如果消费者通过使用subscribe(主题)或基于Kafka的偏移管理策略使用组管理功能,则需要此属性。
heartbeat.interval.ms 3000 The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. 使用Kafka的组管理设施时,心跳到消费者协调器之间的预期时间。 心跳用于确保消费者的会话保持活动并且当新消费者加入或离开组时促进重新平衡。 该值必须设置为低于session.timeout.ms,但通常应设置为不高于该值的1/3。 它可以调整得更低,以控制正常再平衡的预期时间。
max.partition.fetch.bytes 1048576 The maximum amount of data per-partition the server will return. If the first message in the first non-empty partition of the fetch is larger than this limit, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size 服务器将返回的每个分区的最大数据量。 如果提取的第一个非空分区中的第一条消息大于此限制,则仍会返回消息以确保消费者可以取得进展。 代理接受的最大消息大小通过message.max.bytes(broker config)或max.message.bytes(topic config)定义。 请参阅fetch.max.bytes以限制使用者请求大小
session.timeout.ms 10000 The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms. 使用Kafka的组管理工具时用于检测消费者故障的超时。 消费者发送周期性心跳以向代理表明其存活。 如果在此会话超时到期之前代理没有收到心跳,则代理将从组中删除此消费者并启动重新平衡。 请注意,该值必须在代理配置中由group.min.session.timeout.ms和group.max.session.timeout.ms配置的允许范围内。
ssl.key.password null The password of the private key in the key store file. This is optional for client. 密钥存储文件中的私钥的密码。 这对于客户端是可选的。
ssl.keystore.location null The location of the key store file. This is optional for client and can be used for two-way authentication for client. 密钥存储文件的位置。 这对于客户端是可选的,并且可以用于客户端的双向认证。
ssl.keystore.password null The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured.  密钥存储文件的存储密码。 这对于客户端是可选的,只有在配置了ssl.keystore.location时才需要。
ssl.truststore.location null The location of the trust store file.  信任存储文件的位置。
ssl.truststore.password null The password for the trust store file.  信任存储文件的密码。
auto.offset.reset latest What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted): earliest: automatically reset the offset to the earliest offset; latest: automatically reset the offset to the latest offset; none: throw exception to the consumer if no previous offset is found for the consumer's group; anything else: throw exception to the consumer. 当Kafka中没有初始偏移量,或者当前偏移量在服务器上已不存在时(例如,因为该数据已被删除),该怎么办:earliest:自动将偏移量重置为最早的偏移量;latest:自动将偏移量重置为最新的偏移量;none:如果没有为消费者所在的组找到以前的偏移量,则向消费者抛出异常;其他任何值:向消费者抛出异常。
connections.max.idle.ms 540000 Close idle connections after the number of milliseconds specified by this config. 在此配置指定的毫秒数后关闭空闲连接。
enable.auto.commit TRUE If true the consumer's offset will be periodically committed in the background. 如果为true,则消费者的偏移量将在后台定期提交。
exclude.internal.topics TRUE Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true the only way to receive records from an internal topic is subscribing to it. 来自内部主题(如偏移量)的记录是否应向消费者公开。 如果设置为true,则从内部主题接收记录的唯一方法是订阅它。
fetch.max.bytes 52428800 The maximum amount of data the server should return for a fetch request. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel. 服务器应针对抓取请求返回的最大数据量。 这不是绝对最大值,如果提取的第一个非空分区中的第一个消息大于此值,则仍会返回消息以确保消费者可以取得进展。 代理接受的最大消息大小通过message.max.bytes(broker config)或max.message.bytes(topic config)定义。 请注意,消费者并行执行多个提取。
max.poll.interval.ms 300000 The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.  使用消费组管理时poll()调用之间的最大延迟。 这提供了消费者在获取更多记录之前可以空闲的时间量的上限。 如果在超时到期之前未调用poll(),则消费者被视为失败,并且组将重新平衡以便将分区重新分配给另一个成员。
max.poll.records 500 The maximum number of records returned in a single call to poll(). 在对poll()的单个调用中返回的最大记录数。
partition.assignment.strategy [class org.apache.kafka.clients.consumer.RangeAssignor] The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used 分区分配策略的类名,客户端将在分组管理使用时用于在消费者实例之间分配分区所有权
receive.buffer.bytes 65536 The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. 读取数据时使用的TCP接收缓冲区大小(SO_RCVBUF)。 如果值为-1,将使用操作系统默认值。
request.timeout.ms 305000 The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. 此配置控制客户端等待请求响应的最长时间。 如果在超时之前没有收到响应,客户端将在必要时重新发送请求;如果重试次数耗尽,则该请求失败。
sasl.kerberos.service.name null The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. Kafka运行的Kerberos主体名称。 这可以在Kafka的JAAS配置或Kafka的配置中定义。
sasl.mechanism GSSAPI SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. SASL机制用于客户端连接。 这可以是安全提供者可用的任何机制。 GSSAPI是默认机制。
security.protocol PLAINTEXT Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. 用于与代理通信的协议。 有效值为:PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL。
send.buffer.bytes 131072 The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. 发送数据时使用的TCP发送缓冲区(SO_SNDBUF)的大小。 如果值为-1,将使用操作系统默认值。
ssl.enabled.protocols [TLSv1.2, TLSv1.1, TLSv1] The list of protocols enabled for SSL connections. 为SSL连接启用的协议列表。
ssl.keystore.type JKS The file format of the key store file. This is optional for client. 密钥存储文件的文件格式。 这对于客户端是可选的。
ssl.protocol TLS The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. 用于生成SSLContext的SSL协议。 默认设置为TLS,这在大多数情况下是正常的。 最近的JVM中允许的值是TLS,TLSv1.1和TLSv1.2。 旧的JVM可能支持SSL,SSLv2和SSLv3,但是由于已知的安全漏洞,它们的使用不受欢迎。
ssl.provider null The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. 用于SSL连接的安全提供程序的名称。 默认值是JVM的默认安全提供程序。
ssl.truststore.type JKS The file format of the trust store file. 信任存储文件的文件格式。
auto.commit.interval.ms 5000 The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true. 如果enable.auto.commit设置为true,则消费者偏移的频率(以毫秒为单位)将自动提交到Kafka。
check.crcs TRUE Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. 自动检查所消耗记录的CRC32。 这确保没有发生消息的在线或磁盘损坏。 此检查会增加一些开销,因此在寻求极高性能的情况下可能会禁用此检查。
client.id "" An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. 在发出请求时传递给服务器的id字符串。 这样做的目的是通过允许在服务器端请求记录中包括一个逻辑应用程序名称,能够跟踪超出ip / port的请求源。
fetch.max.wait.ms 500 The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes. 如果没有足够的数据来立即满足fetch.min.bytes给出的要求,则服务器在应答提取请求之前将阻止的最长时间。
interceptor.classes null A list of classes to use as interceptors. Implementing the ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors. 要用作拦截器的类的列表。 实现ConsumerInterceptor接口允许您拦截(并且可能变异)消费者接收的记录。 默认情况下,没有拦截器。
metadata.max.age.ms 300000 The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. 即使我们没有看到任何分区领导更改以主动发现任何新的代理或分区,我们强制刷新元数据的时间(以毫秒为单位)。
metric.reporters [] A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. 用作度量报告器的类的列表。 实现MetricReporter接口允许插入将通知新度量标准创建的类。 总是包括JmxReporter以注册JMX统计信息。
metrics.num.samples 2 The number of samples maintained to compute metrics. 维持计算度量的样本数。
metrics.sample.window.ms 30000 The window of time a metrics sample is computed over. 计算度量样本的时间窗口。
reconnect.backoff.ms 50 The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker. 尝试重新连接到给定主机之前等待的时间。 这避免了在紧密环路中重复连接到主机。 此回退适用于消费者发送给代理的所有请求。
retry.backoff.ms 100 The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. 尝试对指定主题分区重试失败的请求之前等待的时间。 这避免了在一些故障情况下在紧密循环中重复发送请求。
sasl.kerberos.kinit.cmd /usr/bin/kinit Kerberos kinit command path. Kerberos kinit命令路径。
sasl.kerberos.min.time.before.relogin 60000 Login thread sleep time between refresh attempts. 登录线程在刷新尝试之间的休眠时间。
sasl.kerberos.ticket.renew.jitter 0.05 Percentage of random jitter added to the renewal time. 添加到更新时间的随机抖动的百分比。
sasl.kerberos.ticket.renew.window.factor 0.8 Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. 登录线程将睡眠,直到从上次刷新到票据到期的时间的指定窗口因子已经到达,在该时间它将尝试更新票证。
ssl.cipher.suites null A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. 密码套件列表。 这是一种命名的认证,加密,MAC和密钥交换算法的组合,用于使用TLS或SSL网络协议协商网络连接的安全设置。默认情况下,支持所有可用的密码套件。
ssl.endpoint.identification.algorithm null The endpoint identification algorithm to validate server hostname using server certificate.  端点标识算法,使用服务器证书验证服务器主机名。
ssl.keymanager.algorithm SunX509 The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. 密钥管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的密钥管理器工厂算法。
ssl.secure.random.implementation null The SecureRandom PRNG implementation to use for SSL cryptography operations.  用于SSL加密操作的SecureRandom PRNG实现。
ssl.trustmanager.algorithm PKIX The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. 信任管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的信任管理器工厂算法。
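The entries from bootstrap.servers down to this point describe the new Java consumer. As a minimal sketch (the broker address, group id and the topic name "demo-topic" are placeholders, and String deserializers are an assumption), a few of them can be wired into a KafkaConsumer like this:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // Minimal sketch of a new (Java) consumer using the keys documented above.
    public class ConsumerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");          // placeholder broker list
            props.put("group.id", "example-group");                    // placeholder group id
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("enable.auto.commit", "true");
            props.put("auto.offset.reset", "latest");                  // earliest | latest | none

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("demo-topic"));
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }

With enable.auto.commit=true, the offsets of records returned by poll() are committed in the background every auto.commit.interval.ms.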
group.id A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id multiple processes indicate that they are all part of the same consumer group. 唯一标识此消费者所属的消费者进程组的字符串。 通过设置相同的组ID,多个进程指示它们都是同一使用者组的一部分。
zookeeper.connect Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3. The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. If so the consumer should use the same chroot path in its connection string. For example to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
consumer.id null Generated automatically if not set. 未设置时自动生成。
socket.timeout.ms 30 * 1000 The socket timeout for network requests. The actual timeout set will be max.fetch.wait + socket.timeout.ms. 网络请求的套接字超时。 实际的超时设置将是max.fetch.wait + socket.timeout.ms。
socket.receive.buffer.bytes 64 * 1024 The socket receive buffer for network requests 用于网络请求的套接字接收缓冲区
fetch.message.max.bytes 1024 * 1024 The number of bytes of messages to attempt to fetch for each topic-partition in each fetch request. These bytes will be read into memory for each partition, so this helps control the memory used by the consumer. The fetch request size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch. 尝试在每个获取请求中为每个主题分区获取的消息的字节数。 这些字节将被读入每个分区的内存,因此这有助于控制消费者使用的内存。 获取请求大小必须至少与服务器允许的最大消息大小一样大,否则生产者可能发送消费者无法提取的更大消息。
num.consumer.fetchers 1 The number fetcher threads used to fetch data. 用于提取数据的提取线程数。
auto.commit.enable TRUE If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin. 如果为true,则定期向ZooKeeper提交消费者已经获取的消息的偏移量。 当进程失败时,该已提交的偏移量将被用作新消费者开始消费的位置。
auto.commit.interval.ms 60 * 1000 The frequency in ms that the consumer offsets are committed to zookeeper. 消费者偏移量提交到zookeeper的频率(以毫秒为单位)。
queued.max.message.chunks 2 Max number of message chunks buffered for consumption. Each chunk can be up to fetch.message.max.bytes. 缓冲消耗的消息块的最大数量。 每个块最多可以达到fetch.message.max.bytes。
rebalance.max.retries 4 When a new consumer joins a consumer group the set of consumers attempt to "rebalance" the load to assign partitions to each consumer. If the set of consumers changes while this assignment is taking place the rebalance will fail and retry. This setting controls the maximum number of attempts before giving up. 当新消费者加入消费者组时,该组消费者尝试“重新平衡”负载以向每个消费者分配分区。 如果在进行此分配时,消费者集合发生变化,则重新平衡将失败并重试。 此设置控制放弃之前的最大尝试次数。
fetch.min.bytes 1 The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. 服务器对获取请求应返回的最小数据量。 如果可用数据不足,请求将等待累积到足够的数据后再应答。
fetch.wait.max.ms 100 The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes 如果没有足够的数据立即满足fetch.min.bytes,则服务器在应答提取请求之前将阻止的最长时间
rebalance.backoff.ms 2000 Backoff time between retries during rebalance. If not set explicitly, the value in zookeeper.sync.time.ms is used. 在重新平衡期间重试之间的退避时间。 如果未明确设置,则使用zookeeper.sync.time.ms中的值。
refresh.leader.backoff.ms 200 Backoff time to wait before trying to determine the leader of a partition that has just lost its leader. 在尝试确定刚刚失去其领导者的分区的领导者之前等待的退避时间。
auto.offset.reset largest What to do when there is no initial offset in ZooKeeper or if an offset is out of range: * smallest : automatically reset the offset to the smallest offset * largest : automatically reset the offset to the largest offset * anything else: throw exception to the consumer 当ZooKeeper中没有初始偏移量或偏移量超出范围时,该怎么办:* smallest:自动将偏移量重置为最小偏移量 * largest:自动将偏移量重置为最大偏移量 * 其他任何值:向消费者抛出异常
consumer.timeout.ms -1 Throw a timeout exception to the consumer if no message is available for consumption after the specified interval 如果在指定的时间间隔后没有消息可用,则向使用者抛出超时异常
exclude.internal.topics TRUE Whether messages from internal topics (such as offsets) should be exposed to the consumer. 来自内部主题的消息(如偏移量)是否应向消费者公开。
client.id group id value The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request. 客户端标识是在每个请求中发送的用户指定的字符串,以帮助跟踪调用。 它应该在逻辑上标识发出请求的应用程序。
zookeeper.session.timeout.ms  6000 ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper for this period of time it is considered dead and a rebalance will occur. ZooKeeper会话超时。 如果消费者在这段时间内没有心跳到ZooKeeper,它被认为死了,并且将发生重新平衡。
zookeeper.connection.timeout.ms 6000 The max time that the client waits while establishing a connection to zookeeper. 客户端在建立与zookeeper的连接时等待的最长时间。
zookeeper.sync.time.ms  2000 How far a ZK follower can be behind a ZK leader ZK follower可以落后于ZK leader多远
offsets.storage zookeeper Select where offsets should be stored (zookeeper or kafka). 选择偏移量应存储在哪里(zookeeper或kafka)。
offsets.channel.backoff.ms 1000 The backoff period when reconnecting the offsets channel or retrying failed offset fetch/commit requests. 重新连接偏移通道或重试失败的偏移提取/提交请求时的退避周期。
offsets.channel.socket.timeout.ms 10000 Socket timeout when reading responses for offset fetch/commit requests. This timeout is also used for ConsumerMetadata requests that are used to query for the offset manager. 读取偏移提取/提交请求的响应时的套接字超时。 此超时还用于用于查询偏移管理器的ConsumerMetadata请求。
offsets.commit.max.retries 5 Retry the offset commit up to this many times on failure. This retry count only applies to offset commits during shut-down. It does not apply to commits originating from the auto-commit thread. It also does not apply to attempts to query for the offset coordinator before committing offsets. i.e., if a consumer metadata request fails for any reason, it will be retried and that retry does not count toward this limit. 失败时最多重试偏移提交此配置指定的次数。 此重试计数仅适用于关闭期间的偏移提交。 它不适用于源自自动提交线程的提交。 它也不适用于在提交偏移之前查询偏移协调器的尝试。 即,如果消费者元数据请求由于任何原因失败,则它将被重试,并且该重试不计入此限制。
dual.commit.enabled TRUE If you are using "kafka" as offsets.storage, you can dual commit offsets to ZooKeeper (in addition to Kafka). This is required during migration from zookeeper-based offset storage to kafka-based offset storage. With respect to any given consumer group, it is safe to turn this off after all instances within that group have been migrated to the new version that commits offsets to the broker (instead of directly to ZooKeeper). 如果你使用"kafka"作为offsets.storage,你可以将偏移量同时提交到ZooKeeper(除了Kafka之外)。 这在从基于zookeeper的偏移存储迁移到基于kafka的偏移存储期间是必需的。 对于任何给定的消费者组,在该组中的所有实例都迁移到向代理提交偏移量的新版本(而不是直接提交到ZooKeeper)之后,可以安全地将其关闭。
partition.assignment.strategy range Select between the "range" or "roundrobin" strategy for assigning partitions to consumer streams. The round-robin partition assignor lays out all the available partitions and all the available consumer threads. It then proceeds to do a round-robin assignment from partition to consumer thread. If the subscriptions of all consumer instances are identical, then the partitions will be uniformly distributed. (i.e., the partition ownership counts will be within a delta of exactly one across all consumer threads.) Round-robin assignment is permitted only if: (a) Every topic has the same number of streams within a consumer instance; (b) The set of subscribed topics is identical for every consumer instance within the group. Range partitioning works on a per-topic basis. For each topic, we lay out the available partitions in numeric order and the consumer threads in lexicographic order. We then divide the number of partitions by the total number of consumer streams (threads) to determine the number of partitions to assign to each consumer. If it does not evenly divide, then the first few consumers will have one extra partition. 在用于向消费者流分配分区的"range"或"roundrobin"策略之间进行选择。轮询(round-robin)分区分配器会列出所有可用分区和所有可用的消费者线程,然后以轮询方式将分区逐一分配给消费者线程。 如果所有消费者实例的订阅相同,则分区将均匀分布。 (即,所有消费者线程之间的分区所有权计数相差不超过1。)只有在以下情况下才允许轮询分配:(a)每个主题在每个消费者实例内具有相同数量的流;(b)组内每个消费者实例订阅的主题集合相同。 范围(range)分区在每个主题的基础上工作。对于每个主题,我们按数字顺序排列可用分区,并按字典顺序排列消费者线程。 然后,我们将分区数除以消费者流(线程)的总数,以确定分配给每个消费者的分区数。 如果不能均匀整除,则前几个消费者将多分到一个分区。
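Everything from group.id and zookeeper.connect above down to partition.assignment.strategy belongs to the older ZooKeeper-based high-level consumer. A minimal sketch of a representative subset of those keys, collected into a Properties object, is shown below; the host names, group id and chosen values are placeholders, and the resulting Properties would typically be handed to the old high-level consumer's kafka.consumer.ConsumerConfig.

    import java.util.Properties;

    // Illustrative subset of the ZooKeeper-based ("old") consumer settings above.
    public class OldConsumerConfigExample {
        public static Properties oldConsumerProps() {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181/kafka"); // placeholder
            props.put("group.id", "legacy-group");                              // placeholder
            props.put("auto.commit.enable", "true");
            props.put("auto.commit.interval.ms", "60000");
            props.put("auto.offset.reset", "smallest");        // smallest | largest
            props.put("offsets.storage", "kafka");
            props.put("dual.commit.enabled", "true");          // needed while migrating offsets
            props.put("partition.assignment.strategy", "roundrobin");
            return props;
        }
    }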
config.storage.topic kafka topic to store configs 用于存储配置的kafka主题
group.id A unique string that identifies the Connect cluster group this worker belongs to. 标识此worker所属的Connect集群组的唯一字符串。
key.converter Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. Converter类用于在Kafka Connect格式和写入Kafka的序列化格式之间进行转换。 这控制写入或读取Kafka的消息中的键的格式,并且由于它独立于连接器,因此它允许任何连接器使用任何序列化格式。 常见格式的示例包括JSON和Avro。
offset.storage.topic kafka topic to store connector offsets in 用于存储连接器偏移量的kafka主题
status.storage.topic kafka topic to track connector and task status 用于跟踪连接器和任务状态的kafka主题
value.converter Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. Converter类用于在Kafka Connect格式和写入Kafka的序列化格式之间进行转换。 这控制写入或读取Kafka的消息中的值的格式,并且由于它独立于连接器,因此它允许任何连接器使用任何序列化格式。 常见格式的示例包括JSON和Avro。
internal.key.converter Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Converter类用于在Kafka Connect格式和写入Kafka的序列化格式之间进行转换。 这控制写入或读取Kafka的消息中的键的格式,并且由于它独立于连接器,因此它允许任何连接器使用任何序列化格式。 常见格式的示例包括JSON和Avro。 此设置控制用于框架使用的内部记录数据的格式,例如配置和偏移量,因此用户通常可以使用任何正在运行的Converter实现。
internal.value.converter Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Converter类用于在Kafka Connect格式和写入Kafka的序列化格式之间进行转换。 这控制写入或读取Kafka的消息中的值的格式,并且由于它独立于连接器,因此它允许任何连接器使用任何序列化格式。 常见格式的示例包括JSON和Avro。 此设置控制用于框架使用的内部记录数据的格式,例如配置和偏移量,因此用户通常可以使用任何正在运行的Converter实现。
bootstrap.servers [localhost:9092] A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). 用于建立与Kafka集群的初始连接的主机/端口对的列表。 客户端将使用所有服务器,而不考虑在此指定哪些服务器进行引导 - 此列表仅影响用于发现全套服务器的初始主机。 此列表应采用host1:port1,host2:port2,...的形式。由于这些服务器仅用于初始连接以发现完整的集群成员资格(可能会动态更改),所以此列表不需要包含完整集的服务器(您可能想要多个服务器,以防万一服务器关闭)。
heartbeat.interval.ms 3000 The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. 使用Kafka的组管理设施时,心跳到组协调器之间的预计时间。 心跳用于确保工作者的会话保持活动状态,并在新成员加入或离开组时方便重新平衡。 该值必须设置为低于session.timeout.ms,但通常应设置为不高于该值的1/3。 它可以调整得更低,以控制正常再平衡的预期时间。
rebalance.timeout.ms 60000 The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures. 重新平衡开始后每个工作人员加入群组的最长允许时间。 这基本上是对所有任务刷新任何挂起的数据和提交偏移量所需的时间量的限制。 如果超过超时,那么工作程序将从组中删除,这将导致偏移提交失败。
session.timeout.ms 10000 The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms. 用于检测worker失败的超时。 工作者发送定期心跳以向代理指示其活跃度。 如果代理在此会话超时期满之前未收到心跳,则代理将从组中删除工作程序并启动重新平衡。 请注意,该值必须在代理配置中通过group.min.session.timeout.ms和group.max.session.timeout.ms配置的允许范围内。
ssl.key.password null The password of the private key in the key store file. This is optional for client. 密钥存储文件中的私钥的密码。 这对于客户端是可选的。
ssl.keystore.location null The location of the key store file. This is optional for client and can be used for two-way authentication for client. 密钥存储文件的位置。 这对于客户端是可选的,并且可以用于客户端的双向认证。
ssl.keystore.password null The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured.  密钥存储文件的存储密码。 这对于客户端是可选的,只有在配置了ssl.keystore.location时才需要。
ssl.truststore.location null The location of the trust store file.  信任存储文件的位置。
ssl.truststore.password null The password for the trust store file.  信任存储文件的密码。
connections.max.idle.ms 540000 Close idle connections after the number of milliseconds specified by this config. 在此配置指定的毫秒数后关闭空闲连接。
receive.buffer.bytes 32768 The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. 读取数据时使用的TCP接收缓冲区大小(SO_RCVBUF)。 如果值为-1,将使用操作系统默认值。
request.timeout.ms 40000 The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. 此配置控制客户端等待请求响应的最长时间。 如果在超时之前没有收到响应,客户端将在必要时重新发送请求;如果重试次数耗尽,则该请求失败。
sasl.kerberos.service.name null The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. Kafka运行的Kerberos主体名称。 这可以在Kafka的JAAS配置或Kafka的配置中定义。
sasl.mechanism GSSAPI SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. SASL机制用于客户端连接。 这可以是安全提供者可用的任何机制。 GSSAPI是默认机制。
security.protocol PLAINTEXT Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. 用于与代理通信的协议。 有效值为:PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL。
send.buffer.bytes 131072 The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. 发送数据时使用的TCP发送缓冲区(SO_SNDBUF)的大小。 如果值为-1,将使用操作系统默认值。
ssl.enabled.protocols [TLSv1.2, TLSv1.1, TLSv1] The list of protocols enabled for SSL connections. 为SSL连接启用的协议列表。
ssl.keystore.type JKS The file format of the key store file. This is optional for client. 密钥存储文件的文件格式。 这对于客户端是可选的。
ssl.protocol TLS The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. 用于生成SSLContext的SSL协议。 默认设置为TLS,这在大多数情况下是正常的。 最近的JVM中允许的值是TLS,TLSv1.1和TLSv1.2。 旧的JVM可能支持SSL,SSLv2和SSLv3,但是由于已知的安全漏洞,它们的使用不受欢迎。
ssl.provider null The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. 用于SSL连接的安全提供程序的名称。 默认值是JVM的默认安全提供程序。
ssl.truststore.type JKS The file format of the trust store file. 信任存储文件的文件格式。
worker.sync.timeout.ms 3000 When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining. 当worker与其他worker不同步并需要重新同步配置时,最多等待此时间量,然后放弃、离开组,并在重新加入之前等待一个退避周期。
worker.unsync.backoff.ms 300000 When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining. 当worker与其他worker不同步并且未能在worker.sync.timeout.ms内赶上时,离开Connect集群这么长时间后再重新加入。
access.control.allow.methods "" Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD. 通过设置Access-Control-Allow-Methods头来设置跨源请求支持的方法。 Access-Control-Allow-Methods标头的默认值允许GET,POST和HEAD的跨源请求。
access.control.allow.origin "" Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API. 为REST API请求设置Access-Control-Allow-Origin标头的值。要启用跨源访问,请将其设置为应允许访问API的应用程序的域,或设置为'*'以允许来自任何域的访问。 默认值仅允许从REST API的域进行访问。
client.id "" An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. 在发出请求时传递给服务器的id字符串。 这样做的目的是通过允许在服务器端请求记录中包括一个逻辑应用程序名称,能够跟踪超出ip / port的请求源。
metadata.max.age.ms 300000 The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. 即使我们没有看到任何分区领导更改以主动发现任何新的代理或分区,我们强制刷新元数据的时间(以毫秒为单位)。
metric.reporters [] A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. 用作度量报告器的类的列表。 实现MetricReporter接口允许插入将通知新度量标准创建的类。 总是包括JmxReporter以注册JMX统计信息。
metrics.num.samples 2 The number of samples maintained to compute metrics. 维持计算度量的样本数。
metrics.sample.window.ms 30000 The window of time a metrics sample is computed over. 计算度量样本的时间窗口。
offset.flush.interval.ms 60000 Interval at which to try committing offsets for tasks. 尝试提交任务的偏移量的间隔。
offset.flush.timeout.ms 5000 Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. 在取消该过程并恢复偏移数据以便在将来的尝试中提交之前,等待记录刷新以及分区偏移数据提交到偏移存储的最大毫秒数。
reconnect.backoff.ms 50 The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker. 尝试重新连接到给定主机之前等待的时间。 这避免了在紧密环路中重复连接到主机。 此回退适用于消费者发送给代理的所有请求。
rest.advertised.host.name null If this is set, this is the hostname that will be given out to other workers to connect to. 如果设置了此项,这就是提供给其他worker用于连接的主机名。
rest.advertised.port null If this is set, this is the port that will be given out to other workers to connect to. 如果设置了此项,这就是提供给其他worker用于连接的端口。
rest.host.name null Hostname for the REST API. If this is set, it will only bind to this interface. REST API的主机名。 如果设置,它将只绑定到此接口。
rest.port 8083 Port for the REST API to listen on. 用于REST API的端口。
retry.backoff.ms 100 The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. 尝试对指定主题分区重试失败的请求之前等待的时间。 这避免了在一些故障情况下在紧密循环中重复发送请求。
sasl.kerberos.kinit.cmd /usr/bin/kinit Kerberos kinit command path. Kerberos kinit命令路径。
sasl.kerberos.min.time.before.relogin 60000 Login thread sleep time between refresh attempts. 登录线程在刷新尝试之间的休眠时间。
sasl.kerberos.ticket.renew.jitter 0.05 Percentage of random jitter added to the renewal time. 添加到更新时间的随机抖动的百分比。
sasl.kerberos.ticket.renew.window.factor 0.8 Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. 登录线程将睡眠,直到从上次刷新到票据到期的时间的指定窗口因子已经到达,在该时间它将尝试更新票证。
ssl.cipher.suites null A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. 密码套件列表。 这是一种命名的认证,加密,MAC和密钥交换算法的组合,用于使用TLS或SSL网络协议协商网络连接的安全设置。 默认情况下,支持所有可用的密码套件。
ssl.endpoint.identification.algorithm null The endpoint identification algorithm to validate server hostname using server certificate.  端点标识算法,使用服务器证书验证服务器主机名。
ssl.keymanager.algorithm SunX509 The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. 密钥管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的密钥管理器工厂算法。
ssl.secure.random.implementation null The SecureRandom PRNG implementation to use for SSL cryptography operations.  用于SSL加密操作的SecureRandom PRNG实现。
ssl.trustmanager.algorithm PKIX The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. 信任管理器工厂用于SSL连接的算法。 默认值是为Java虚拟机配置的信任管理器工厂算法。
task.shutdown.graceful.timeout.ms 5000 Amount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task. All tasks have shutdown triggered, then they are waited on sequentially. 等待任务正常关闭的时间。 这是总时间量,而不是每个任务的时间。 所有任务都会先触发关闭,然后按顺序等待它们完成。
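The block from config.storage.topic down to task.shutdown.graceful.timeout.ms covers Kafka Connect worker settings. A minimal sketch of a distributed-mode worker configuration built from those keys follows; the topic names, group id and broker list are placeholders, JsonConverter is just one possible converter choice, and in practice these keys normally live in the worker's properties file rather than in code.

    import java.util.Properties;

    // Illustrative distributed Connect worker configuration using the keys above.
    public class ConnectWorkerConfigExample {
        public static Properties workerProps() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");          // placeholder broker list
            props.put("group.id", "connect-cluster");                  // placeholder worker group
            props.put("config.storage.topic", "connect-configs");
            props.put("offset.storage.topic", "connect-offsets");
            props.put("status.storage.topic", "connect-status");
            props.put("key.converter", "org.apache.kafka.connect.json.JsonConverter");
            props.put("value.converter", "org.apache.kafka.connect.json.JsonConverter");
            props.put("internal.key.converter", "org.apache.kafka.connect.json.JsonConverter");
            props.put("internal.value.converter", "org.apache.kafka.connect.json.JsonConverter");
            props.put("offset.flush.interval.ms", "60000");
            props.put("rest.port", "8083");
            return props;
        }
    }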
application.id An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix. 流处理应用程序的标识符。 在Kafka集群中必须是唯一的。 它用作1)默认的client-id前缀,2)用于成员资格管理的group-id,3)changelog主题前缀。
bootstrap.servers A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). 用于建立与Kafka集群的初始连接的主机/端口对的列表。 客户端将使用所有服务器,而不考虑在此指定哪些服务器进行引导 - 此列表仅影响用于发现全套服务器的初始主机。 此列表应采用host1:port1,host2:port2,...的形式。由于这些服务器仅用于初始连接以发现完整的集群成员资格(可能会动态更改),所以此列表不需要包含完整集的服务器(您可能想要多个服务器,以防万一服务器关闭)。
client.id "" An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. 在发出请求时传递给服务器的id字符串。 这样做的目的是通过允许在服务器端请求记录中包括一个逻辑应用程序名称,能够跟踪超出ip / port的请求源。
zookeeper.connect "" Zookeeper connect string for Kafka topics management. Zookeeper连接字符串用于Kafka主题管理。
key.serde class org.apache.kafka.common.serialization.Serdes$ByteArraySerde Serializer / deserializer class for key that implements the Serde interface. 用于实现Serde接口的键的序列化器/反序列化器类。
partition.grouper class org.apache.kafka.streams.processor.DefaultPartitionGrouper Partition grouper class that implements the PartitionGrouper interface. 实现PartitionGrouper接口的分区分组器类。
replication.factor 1 The replication factor for change log topics and repartition topics created by the stream processing application. 由流处理应用程序创建的更改日志主题和重新分区主题的复制因素。
state.dir /tmp/kafka-streams Directory location for state store. 状态存储的目录位置。
timestamp.extractor class org.apache.kafka.streams.processor.ConsumerRecordTimestampExtractor Timestamp extractor class that implements the TimestampExtractor interface. 实现TimestampExtractor接口的Timestamp提取器类。
value.serde class org.apache.kafka.common.serialization.Serdes$ByteArraySerde Serializer / deserializer class for value that implements the Serde interface. 用于实现Serde接口的值的Serializer / deserializer类。
windowstore.changelog.additional.retention.ms 86400000 Added to a windows maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day 添加到窗口的maintainMs,以确保数据不会过早地从日志中删除。 允许时钟漂移。 默认值为1天
application.server "" A host:port pair pointing to an embedded user defined endpoint that can be used for discovering the locations of state stores within a single KafkaStreams application 指向嵌入式用户定义端点的主机:端口对,可用于在单个KafkaStreams应用程序中发现状态存储的位置
buffered.records.per.partition 1000 The maximum number of records to buffer per partition. 每个分区缓冲的最大记录数。
cache.max.bytes.buffering 10485760 Maximum number of memory bytes to be used for buffering across all threads 要用于所有线程缓冲的最大内存字节数
commit.interval.ms 30000 The frequency with which to save the position of the processor. 保存处理器位置的频率。
metric.reporters [] A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. 用作度量报告器的类的列表。 实现MetricReporter接口允许插入将被通知新度量标准创建的类。 总是包括JmxReporter以注册JMX统计信息。
metrics.num.samples 2 The number of samples maintained to compute metrics. 维持计算度量的样本数。
metrics.sample.window.ms 30000 The window of time a metrics sample is computed over. 计算度量样本的时间窗口。
num.standby.replicas 0 The number of standby replicas for each task. 每个任务的备用副本数。
num.stream.threads 1 The number of threads to execute stream processing. 执行流处理的线程数。
poll.ms 100 The amount of time in milliseconds to block waiting for input. 阻止等待输入的时间(以毫秒为单位)。
rocksdb.config.setter null A Rocks DB config setter class that implements the RocksDBConfigSetter interface 实现RocksDBConfigSetter接口的RocksDB配置设置类
state.cleanup.delay.ms 60000 The amount of time in milliseconds to wait before deleting state when a partition has migrated. 迁移分区时删除状态之前等待的时间(以毫秒为单位)。
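The entries from application.id down to state.cleanup.delay.ms describe Kafka Streams. A minimal sketch of assembling a few of them into a StreamsConfig follows; the application id, broker list and state directory are placeholders, and the serde values simply restate the table defaults.

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    // Illustrative Kafka Streams configuration built from the keys above.
    public class StreamsConfigExample {
        public static StreamsConfig streamsConfig() {
            Properties props = new Properties();
            props.put("application.id", "example-streams-app");        // placeholder
            props.put("bootstrap.servers", "localhost:9092");          // placeholder broker list
            props.put("key.serde", "org.apache.kafka.common.serialization.Serdes$ByteArraySerde");
            props.put("value.serde", "org.apache.kafka.common.serialization.Serdes$ByteArraySerde");
            props.put("num.stream.threads", "1");
            props.put("state.dir", "/tmp/kafka-streams");               // placeholder state directory
            props.put("commit.interval.ms", "30000");
            // The resulting config would typically be passed to a KafkaStreams instance
            // together with a topology/builder.
            return new StreamsConfig(props);
        }
    }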