

| Parameter | Required | Description |
|---|---|---|
| CKafka instance | Yes | Select the target CKafka instance. |
| Kafka topics | Yes | Select one or more Kafka topics. |
| Consumer group | No | If left empty, a consumer group is automatically created with the naming convention `cls-${taskid}`. If specified, the designated consumer group is used for consumption.<br>Note:<br>1. If left empty, ensure the Kafka cluster has permission to auto-create consumer groups.<br>2. If specified, verify that the designated consumer group is not actively used by other tasks; otherwise data loss may occur. |
| Start position | Yes | Earliest: start consuming from the earliest offset.<br>Latest: start consuming from the latest offset.<br>Note: the start position can be configured only when the subscription task is created and cannot be modified afterward. |
| Parameter | Required | Description |
|---|---|---|
| Access mode | Yes | Choose to access your self-built Kafka cluster over a private network or the public network. |
| Network service type | Yes | If the access mode is private network, specify the network service type of the target self-built Kafka cluster:<br>CVM<br>CLB<br>Cloud Connect Network (CCN) (currently in beta; submit a ticket if you need to use it)<br>Direct connect gateway (currently in beta; submit a ticket if you need to use it)<br>Note: for the differences between and usage of the network service types, see Self-built Kafka Private Network Access Configuration Instructions. |
| Network (VPC) | Yes | When the network service type is CVM or CLB, select the VPC instance where the CVM or CLB resides. |
| Service address | Yes | Enter the public IP address or domain name of the target Kafka cluster.<br>Note: to consume logs from other log topics across regions/accounts over the Kafka protocol, use the target log topic's Cross-Account Log Sync via Kafka Data Subscription. |
| Private domain resolution | No | When Kafka brokers deployed on CVM communicate using internal domain names, specify the domain name and IP address of each broker here. For detailed configuration scenarios, see Self-built Kafka Private Network Access Configuration Instructions. |
| Authentication | Yes | Whether authentication is required to access the target Kafka cluster. |
| Protocol | Yes | If the target Kafka cluster requires authentication, select the protocol type: plaintext, sasl_plaintext, sasl_ssl, or ssl. |
| Authentication mechanism | Yes | If the target Kafka cluster requires authentication and the protocol type is sasl_plaintext or sasl_ssl, select the authentication mechanism: PLAIN, SCRAM-SHA-256, or SCRAM-SHA-512. |
| Username/Password | Yes | If the target Kafka cluster requires authentication and the protocol type is sasl_plaintext or sasl_ssl, enter the username and password required to access the cluster. |
| Client SSL authentication | Yes | If the protocol type is sasl_ssl or ssl and a client CA certificate is required for access, enable this configuration and choose an existing certificate, or go to SSL Certificate Service to upload the CA certificate. |
| Server SSL authentication | Yes | If the protocol type is sasl_ssl or ssl and a server certificate is required for access, enable this configuration and choose an existing certificate, or go to SSL Certificate Service to upload the server certificate. |
| Kafka topics | Yes | Enter one or more Kafka topics, separated with commas. |
| Consumer group | No | If left empty, a consumer group is automatically created with the naming convention `cls-${taskid}`. If specified, the designated consumer group is used for consumption.<br>Note:<br>1. If left empty, ensure that the Kafka cluster can automatically create consumer groups.<br>2. If specified, ensure that the designated consumer group is not being used by other tasks; otherwise data loss may occur. |
| Start position | Yes | Earliest: start consuming from the earliest offset.<br>Latest: start consuming from the latest offset.<br>Note: the start position can be configured only when the subscription task is created and cannot be modified afterward. |
| Parameter | Required | Description |
|---|---|---|
| Configuration name | Yes | The name of the Kafka data subscription configuration. |
| Data extraction mode | Yes | Choose from three extraction modes: JSON, single-line full-text, and single-line full regular expression. For details, see Data Extraction Mode. |
| Log sample | Yes | If the data extraction mode is single-line full regular expression, manually enter or automatically obtain a log sample to validate the regular expression and extract key-value pairs. |
| Regular expression | Yes | If the data extraction mode is single-line full regular expression, manually enter or automatically generate a regular expression. The system validates it and extracts key-value pairs based on the expression you provide. For instructions on generating a regular expression automatically, see Automatically Generating Regular Expressions. |
| Log extraction result | Yes | If the data extraction mode is single-line full regular expression, configure or modify the names of the fields extracted by the regular expression. |
| Manual verification | No | If the data extraction mode is single-line full regular expression, you can optionally provide one or more log samples to validate the correctness of the regular expression. |
| Upload parsing-failed logs | Yes | If the data extraction mode is JSON or single-line full regular expression and this option is enabled, LogListener uploads the logs that fail to be parsed. If it is disabled, the failed logs are discarded. |
| Key name of parsing-failed logs | Yes | If uploading parsing-failed logs is enabled, specify a field name as the key; logs that fail to be parsed are uploaded as the value of that field. |
| Encoding format | Yes | Based on your logs, choose one of two encoding formats: UTF-8 or GBK. |
| Use default time | Yes | When enabled, the current system time or the Kafka message timestamp is used as the log timestamp. When disabled, the timestamp is taken from the log's time field. |
| Default time source | Yes | When Use default time is enabled, choose one of two default times as the log timestamp: current system time or Kafka message timestamp. |
| Time field | Yes | When Use default time is disabled and the data extraction mode is JSON or single-line full regular expression, specify the name of the field in the log that represents the time. The value of this field is used as the log's timestamp. |
| Time extraction regex | Yes | When Use default time is disabled and the data extraction mode is single-line full-text, define the time field in the log with a regular expression.<br>Note: if the regular expression matches multiple substrings, the first one is used.<br>Example: if the original log is `message with time 2022-08-08 14:20:20`, you can set the time extraction regex to `\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d`. |
| Time field format | Yes | When Use default time is disabled and the time field in the log is confirmed, specify the time format used to parse the value of the time field. For details, see Configure Time Format. |
| Time zone of the time field | Yes | When Use default time is disabled and the time field and its format are confirmed, choose between two time zone standards: UTC (Coordinated Universal Time) or GMT (Greenwich Mean Time). |
| Time used when parsing fails | Yes | When Use default time is disabled and the time extraction regex or the time field format fails to parse, choose one of two default times as the log timestamp: current system time or Kafka message timestamp. |
| Filter | No | Filters add log collection filtering rules based on business needs, helping you collect only valuable log data. The following rules are supported:<br>Equal to: collect only logs whose specified field values match the specified characters; exact and regular-expression matching are supported.<br>Not equal to: collect only logs whose specified field values do not match the specified characters; exact and regular-expression matching are supported.<br>Field exists: collect only logs in which the specified field exists.<br>Field does not exist: collect only logs in which the specified field does not exist.<br>For example, to collect all JSON logs whose `response_code` is 400 or 500, set the key to `response_code`, select the Equal to rule, and set the value to `400\|500`.<br>Note: multiple filter conditions are combined with AND logic. If multiple filter conditions are configured for the same key name, the rules will overwrite one another. |
| Kafka metadata | No | The following four types of Kafka metadata can be selected to be uploaded along with the logs: kafka_topic, kafka_partition, kafka_offset, and kafka_timestamp.<br>Note: if the original log contains fields with the same names as the metadata above, they will be overwritten. |
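The time extraction regex row above can be exercised with a short sketch (Python is used here purely for illustration; the log sample and the regex are the ones from the table):

```python
import re

# Single-line full-text log sample from the table above.
log = "message with time 2022-08-08 14:20:20"

# Time extraction regex as configured in the console; if it matches
# multiple substrings, only the first match is used.
time_regex = r"\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d"

match = re.search(time_regex, log)
if match:
    print(match.group(0))  # 2022-08-08 14:20:20
```

The extracted value is then parsed according to the configured time field format and time zone.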

| Configuration Item | Feature Description |
|---|---|
| Full-Text Delimiter | A set of characters that splits the full-text value into segments. Only English symbols are supported. The default delimiter set on the console is `@&?\|#()='",;:<>[]{}/ \n\t\r\\`. |
| Case Sensitive | Whether retrieval is case-sensitive. For example, if the log is `Error` and case sensitivity is enabled, the log cannot be retrieved with `error`. |
| Allow Chinese Characters | Enable this feature when the log contains Chinese characters that need to be retrieved. For example, if the log is "User log-in API timeout", the log cannot be retrieved by searching "Timeout" unless this feature is enabled; it can only be retrieved by searching the complete phrase "User log-in API timeout". After the feature is enabled, the log can be retrieved by searching "Timeout". |
For example: `level:error AND timeCost:>1000`. Some logs also contain a special type of metadata field, and the index configuration for these fields is the same as for regular fields.

| Configuration Item | Feature Description |
|---|---|
| Field Name | The field name. A single log topic's key-value index can have up to 300 fields. Only letters, digits, underscores, and the characters `-./@` are supported, and the field name cannot start with an underscore. |
| Field Type | The data type of the field: text, long, or double. The text type supports fuzzy retrieval using wildcards but not range comparison. The long and double types support range comparison but not fuzzy retrieval. |
| Delimiter | The character set used to segment the field value. Only English symbols are supported. The default delimiter set on the console is `@&?\|#()='",;:<>[]{}/ \n\t\r\\`. |
| Allow Chinese Characters | Enable this feature when the field contains Chinese characters that need to be retrieved. For example, if the log is "message: User log-in API timeout", the log cannot be retrieved with `message:"Timeout"` unless the feature is enabled; only `message:"User log-in API timeout"` can retrieve it. After the feature is enabled, `message:"Timeout"` retrieves the log. |
| Statistics | If enabled, you can analyze this field with SQL. When statistics is enabled for a text field and the value is too long, only the first 32766 characters participate in statistical calculations. Enabling statistics incurs no additional fees, so enabling it is recommended. |
| Case Sensitivity | Whether retrieval is case-sensitive. For example, if the log is `level:Error` and case sensitivity is enabled, retrieving with `level:error` will not match. |
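To see how the delimiter set drives retrieval, the following is a simplified illustration of delimiter-based tokenization (a sketch, not the actual CLS tokenizer; the delimiter string below spells out the default set, with the escape sequences written as real newline, tab, carriage-return, and backslash characters):

```python
import re

# Default console delimiter set from the table above.
delimiters = "@&?|#()='\",;:<>[]{}/ \n\t\r\\"

def tokenize(value: str) -> list[str]:
    # Split the value on any run of delimiter characters and drop empties;
    # each resulting token becomes independently searchable.
    pattern = "[" + re.escape(delimiters) + "]+"
    return [t for t in re.split(pattern, value) if t]

print(tokenize("level:error AND timeCost:>1000"))
# ['level', 'error', 'AND', 'timeCost', '1000']
```

Because `:` is a delimiter, searching `error` matches a log containing `level:error`; with case sensitivity enabled, `Error` and `error` remain distinct tokens.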





{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}
agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
body_sent: 23
http_host: 127.0.0.1
method: POST
referer: http://127.0.0.1/my/course/4
remote_ip: 10.135.46.111
request: POST /event/dispatch HTTP/1.1
response_code: 200
responsetime: 0.232
time_local: 22/Jan/2019:19:19:34 +0800
upstreamhost: unix:/tmp/php-cgi.sock
upstreamtime: 0.232
url: /event/dispatch
xff: -
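The JSON extraction above is essentially top-level JSON parsing: every top-level key/value pair becomes a log field. A minimal sketch (the sample is truncated to a few of the fields shown above):

```python
import json

# Truncated version of the raw JSON log sample above.
raw = ('{"remote_ip":"10.135.46.111","method":"POST",'
       '"url":"/event/dispatch","response_code":"200"}')

# Each top-level key/value pair becomes a structured log field.
fields = json.loads(raw)
for key, value in sorted(fields.items()):
    print(f"{key}: {value}")
```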
Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
__CONTENT__:Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354
(\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
body_bytes_sent: 9703
http_host: 127.0.0.1
http_protocol: HTTP/1.1
http_referer: http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum
http_user_agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
remote_addr: 10.135.46.111
request_length: 782
request_method: GET
request_time: 0.354
request_url: /my/course/1
status: 200
time_local: [22/Jan/2019:19:19:30 +0800]
upstream_response_time: 0.354
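How the regular expression maps the raw access log to the extracted fields can be sketched as follows (Python, for illustration only; the field names correspond one-to-one to the capture groups, in the order of the extraction result above):

```python
import re

# Raw access-log sample from above, written as one line.
log = ('10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] '
       '"GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 '
       '"http://127.0.0.1/course/explore?filter%5Btype%5D=all'
       '&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all'
       '&orderBy=studentNum" '
       '"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) '
       'Gecko/20100101 Firefox/64.0" 0.354 0.354')

# The single-line full regular expression from above.
pattern = (r'(\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"'
           r'\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"'
           r'\s+(\S+)\s(\S+).*')

# One field name per capture group.
names = ["remote_addr", "time_local", "request_method", "request_url",
         "http_protocol", "http_host", "status", "request_length",
         "body_bytes_sent", "http_referer", "http_user_agent",
         "request_time", "upstream_response_time"]

match = re.match(pattern, log)
fields = dict(zip(names, match.groups()))
print(fields["status"], fields["request_method"])  # 200 GET
```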
listener.security.protocol.map=CVM:PLAINTEXT
listeners=CVM://10.0.0.2:9092
advertised.listeners=CVM://10.0.0.2:9092

listener.security.protocol.map=CLB:PLAINTEXT
listeners=CLB://10.0.0.2:29092
advertised.listeners=CLB://10.0.0.12:29092
listener.security.protocol.map=DOMAIN:PLAINTEXT
listeners=DOMAIN://10.0.0.2:9092
advertised.listeners=DOMAIN://broker1.cls.tencent.com:9092