```
# The maximum size of the message body, in bytes
message.max.bytes=1000012
# Whether to allow automatic creation of topics; the default is false. Currently, topics can be created and managed through the console or the TencentCloud API.
auto.create.topics.enable=false
# Whether to allow deleting a topic through an API call
delete.topic.enable=true
# The maximum request size allowed by the Broker is 16 MB
socket.request.max.bytes=16777216
# Each IP can establish up to 5000 connections with the Broker
max.connections.per.ip=5000
# Offset retention time; the default is 7 days
offsets.retention.minutes=10080
# If no ACL is set, anyone is allowed to access
allow.everyone.if.no.acl.found=true
# Log segment size is 1 GB
log.segment.bytes=1073741824
# The log rolling check interval is 5 minutes. If the configured retention time is less than 5 minutes, you may need to wait up to 5 minutes before the log is cleared.
log.retention.check.interval.ms=300000
```
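The interaction of the last two settings can be sketched as follows: because expired segments are only deleted when the periodic check runs, a segment may outlive its retention time by up to one check interval. The helper below is purely illustrative, not a Kafka API; its parameter names mirror the Broker settings above.

```python
# Sketch: worst-case delay before an expired log segment is actually deleted,
# given log.retention.check.interval.ms. Hypothetical helper for illustration.

def worst_case_deletion_delay_ms(retention_ms: int, check_interval_ms: int = 300_000) -> int:
    """A segment may survive up to one full check interval past its retention."""
    return retention_ms + check_interval_ms

# With a 2-minute retention, a segment can live up to 7 minutes:
print(worst_case_deletion_delay_ms(120_000))  # 420000
```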
The recommended number of partitions can be estimated from the target throughput `T`, the throughput a single partition can sustain on the producer side `PT`, and the throughput a single partition can sustain on the consumer side `CT`:

```
Num = max( T/PT , T/CT ) = T / min( PT , CT )
```
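As a worked example of the formula above, the sketch below computes the estimate for illustrative throughput figures (any consistent unit, e.g. MB/s); the function name and numbers are assumptions for demonstration only.

```python
# Sketch of the partition-count estimate: Num = max(T/PT, T/CT) = T / min(PT, CT).
import math

def partition_count(T: float, PT: float, CT: float) -> int:
    # Rounded up, since partition counts are whole numbers.
    return math.ceil(T / min(PT, CT))

# Target 100 MB/s; each partition sustains 10 MB/s produced, 20 MB/s consumed:
print(partition_count(100, 10, 20))  # 10
```

The producer side dominates here because `PT < CT`, so the estimate is driven by `T / PT`.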
```
# Maximum message size at the topic level
max.message.bytes=1000012
# The message format of version 0.10.2 is the V1 format
message.format.version=0.10.2-IV0
# Whether a replica not in the ISR can be elected as the leader. This favors availability over reliability and carries a risk of data loss.
unclean.leader.election.enable=true
# Minimum number of in-sync replicas required to accept producer requests. If the number of replicas in sync falls below this value, the server no longer accepts write requests with request.required.acks set to -1 or all.
min.insync.replicas=1
```
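The interplay between `min.insync.replicas` and the producer's `acks` setting can be sketched as below. This is a hypothetical model of the documented rule, not Kafka's actual implementation.

```python
# Sketch: whether the Broker accepts a write, per the min.insync.replicas rule
# described above. Illustrative only; not Kafka source code.

def accepts_write(acks: str, isr_size: int, min_insync_replicas: int) -> bool:
    if acks in ("-1", "all"):
        # acks=all writes are rejected once the ISR shrinks below the minimum
        return isr_size >= min_insync_replicas
    # acks=0 and acks=1 writes are not constrained by min.insync.replicas
    return True

print(accepts_write("all", isr_size=1, min_insync_replicas=2))  # False
print(accepts_write("1", isr_size=1, min_insync_replicas=2))    # True
```

This is why `acks=all` is typically paired with `min.insync.replicas=2` on a 3-replica topic: one replica can fail without rejecting writes, while still guaranteeing every acknowledged message exists on at least two replicas.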
```
# The producer packages messages sent to the same partition into a single batch before sending them to the Broker. batch.size sets the upper limit of the batch size; the default is 16 KB. Setting batch.size too small reduces throughput, while setting it too large uses excessive memory.
batch.size=16384
# Kafka producers support 3 ack mechanisms:
# -1 or all: The Broker responds to the producer only after the leader has received the data and synchronized it to all followers in the ISR. This provides the highest data reliability: no message is lost as long as one in-sync replica is alive. Note that this does not guarantee all replicas have the data before returning; it can be combined with the topic-level parameter min.insync.replicas.
# 0: The producer continues sending the next (batch of) message(s) without waiting for Broker confirmation. This gives the highest production performance but the lowest data reliability (data may be lost on server failure; if the leader dies without the producer noticing, the Broker never receives the message).
# 1: The producer sends the next (batch of) message(s) after the leader has received and acknowledged the data. This is a trade-off between production throughput and data reliability (messages may be lost if the leader dies before replication completes).
# The default value is 1. Set it according to your business scenario.
acks=1
# The maximum time a produce request waits in the Broker for replica synchronization to satisfy the acks setting
timeout.ms=30000
# The memory the producer uses to buffer messages waiting to be sent to the Broker. Adjust it according to the total memory available to the producer process.
buffer.memory=33554432
# When messages are produced faster than the Sender thread can ship them to the Broker and the buffer.memory quota is exhausted, the producer's send operation blocks. This parameter sets the maximum blocking time.
max.block.ms=60000
# The time (in ms) to delay sending so that more messages can be batched together. The default is 0, meaning send immediately. When the pending messages reach batch.size, the request is sent immediately regardless of whether linger.ms has elapsed.
# It is recommended to set linger.ms between 100 and 1000 according to the actual use case. A larger value yields relatively higher throughput but correspondingly higher latency.
linger.ms=100
# The number of bytes of messages cached per partition. When this value is reached, the producer sends the batched messages to the Broker. The default is 16384. A too-small batch.size increases the request count, which may degrade performance and stability; increase it as appropriate for the scenario. Note: this value is an upper limit. If linger.ms elapses before this value is reached, the producer sends whatever has accumulated.
batch.size=16384
# The upper limit of the request packet size the producer can send. The default is 1 MB. When modifying this value, it must not exceed the 16 MB packet size limit configured on the Broker.
max.request.size=1048576
# Compression format. Compression is not supported in versions 0.9 and earlier; GZIP compression is not supported in versions 0.10 and later.
compression.type=[none, snappy, lz4]
# The timeout for requests the client sends to the Broker. It cannot be less than the Broker's replica.lag.time.max.ms, which is currently 10000 ms.
request.timeout.ms=30000
# The maximum number of unacknowledged requests the client can have in flight on each connection. When this value is greater than 1 and retries is greater than 0, messages may arrive out of order. If strict message ordering is required, it is recommended to set this value to 1.
max.in.flight.requests.per.connection=5
# The number of retries when a request fails. It is recommended to set this value above 0 to maximize message delivery through retries.
retries=0
# The wait time between a failed request and the next retry
retry.backoff.ms=100
```
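The batching behavior described by `batch.size` and `linger.ms` can be sketched as a simple trigger rule: a batch is flushed when it reaches `batch.size` bytes or when `linger.ms` has elapsed, whichever comes first. The helper below models the documented behavior for illustration; it is not producer client code.

```python
# Sketch: when the producer flushes a batch, per batch.size and linger.ms above.
# Hypothetical helper; defaults mirror the documented values.

def should_flush(batch_bytes: int, waited_ms: int,
                 batch_size: int = 16_384, linger_ms: int = 100) -> bool:
    return batch_bytes >= batch_size or waited_ms >= linger_ms

print(should_flush(batch_bytes=16_384, waited_ms=5))    # True  (size reached)
print(should_flush(batch_bytes=1_000, waited_ms=100))   # True  (linger elapsed)
print(should_flush(batch_bytes=1_000, waited_ms=5))     # False (keep batching)
```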
```
# Whether to commit the offset to the Broker after consuming a message, so that the latest offset can be fetched from the Broker when the consumer recovers from a failure
enable.auto.commit=true
# The interval for automatically committing the offset when enable.auto.commit=true. It is recommended to set it to at least 1000.
auto.commit.interval.ms=5000
# How to initialize the offset when none exists on the Broker (for example, on first consumption or after the offset expires past 7 days), or how to reset the offset when an OFFSET_OUT_OF_RANGE error is received:
# earliest: automatically reset to the minimum offset of the partition
# latest: automatically reset to the maximum offset of the partition (the default)
# none: do not reset the offset automatically; throw an OffsetOutOfRangeException
auto.offset.reset=latest
# Identifies the consumer group the consumer belongs to
group.id=""
# The consumer timeout when using the Kafka consumer group mechanism. If the Broker receives no heartbeat from the consumer within this period, the consumer is considered failed and the Broker initiates a rebalance. Currently, this value must be between the Broker settings group.min.session.timeout.ms=6000 and group.max.session.timeout.ms=300000.
session.timeout.ms=10000
# The interval at which consumers send heartbeats when using the Kafka consumer group mechanism. This value must be less than session.timeout.ms, generally less than one third of it.
heartbeat.interval.ms=3000
# The maximum interval allowed between successive calls to poll when using the Kafka consumer group mechanism. If poll is not called again within this time, the consumer is considered failed and the Broker initiates a rebalance to reassign its partitions to other consumers.
max.poll.interval.ms=300000
# The minimum amount of data returned by a fetch request. The default is 1 byte, meaning the request can return as soon as possible. Increasing this value increases throughput but also increases latency.
fetch.min.bytes=1
# The maximum amount of data returned by a fetch request. The default is 50 MB.
fetch.max.bytes=52428800
# The maximum wait time for a fetch request
fetch.max.wait.ms=500
# The maximum amount of data returned per partition for a fetch request. The default is 1 MB.
max.partition.fetch.bytes=1048576
# The number of records returned in one poll call
max.poll.records=500
# The client request timeout. If no response is received within this period, the request fails with a timeout.
request.timeout.ms=305000
```
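The three `auto.offset.reset` policies listed above can be sketched as a fallback rule: which offset the consumer resumes from when no committed offset exists or the stored one is out of range. The helper and exception name below are illustrative only, mirroring the documented policy names.

```python
# Sketch: the auto.offset.reset fallback described above. Hypothetical helper;
# not the Kafka consumer API.

def reset_offset(policy: str, beginning: int, end: int) -> int:
    if policy == "earliest":
        return beginning  # minimum offset of the partition
    if policy == "latest":
        return end        # maximum offset of the partition (the default)
    # none: do not reset automatically; surface the error to the application
    raise ValueError("OffsetOutOfRangeException")

print(reset_offset("earliest", beginning=42, end=1000))  # 42
print(reset_offset("latest", beginning=42, end=1000))    # 1000
```

Note the operational difference: `earliest` may reprocess old messages, `latest` may silently skip messages produced while the consumer was away, and `none` forces the application to handle the gap explicitly.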