
Collecting TKE Kubernetes Cluster Logs
Last updated: 2025-12-03 11:22:42
This document describes how to configure log collection rules for Kubernetes clusters of Tencent Kubernetes Engine (TKE) in the console and ship them to Tencent Cloud Log Service (CLS). If you need to use CRD to configure log shipment from TKE Kubernetes clusters, see TKE Scenario: Using CRD to Configure Log Collection.

Use Cases

The TKE Kubernetes business log collection feature collects business logs, audit logs, and event logs from TKE Kubernetes clusters and sends them to Tencent Cloud CLS. To use it, install the log collection component in the cluster and configure collection rules. After installation, the log collection agent runs in the cluster as a DaemonSet. Based on the collection source, CLS log topic, and log parsing method configured in the collection rules, the agent collects logs from the source and sends the log content to the log consumer. Follow the steps below to install and configure the log collection component.

Prerequisites

A cluster has been created in the TKE console. CLS supports collection from TKE standard clusters, elastic clusters, and edge clusters.

Cluster Business Log Collection

Step 1: Selecting a Cluster

1. Log in to the CLS console.
2. In the left sidebar, click Container clusters to go to the container cluster management page.
3. In the upper-right corner of the page, select TKE cluster.

4. Select the region where the TKE cluster resides and find the target collection cluster.
5. If the status of the collection component is Not installed, click Install to install the log collection component.

Note:
If the log collection component is installed in a cluster, Pods named tke-log-agent and cls-provisioner will be deployed as a DaemonSet in the kube-system namespace of the cluster. Reserve at least 0.1 CPU cores and 16 MiB of memory on each node for them.
6. If the status of the collection component is Latest, click Create Collection Configuration on the right side to go to the cluster log collection configuration process page.




Step 2: Configuring a Log Topic

Go to the cluster log collection configuration process page. In the Create Log Topic step, you can select an existing log topic or create a log topic for storing logs. For more information about log topics, see Log Topic.


Step 3: Configuring Collection Rules

After selecting a log topic, click Next to go to the Collection Configuration step to configure collection rules. The configuration information is as follows:
Log Source Configuration:

Collection Rule Name: You can customize the log collection rule name.
Collection Type: Currently, the system supports the following types of collection: container standard output, container file path, and node file path.
Container Standard Output
Container File Path
Node File Path
The log collection source for container standard output can be specified in three ways: All containers, Specific workload, and Specific pod labels.
All containers: The system collects standard output logs from all containers in the specified namespace, as shown in the following figure:

Specific workload: The system collects standard output logs from the specified container within the specified workload in the specified namespace, as shown in the following figure:

Specific pod labels: The system collects standard output logs from all containers with specified Pod labels in the specified namespace, as shown in the following figure:

Note:
The container file path cannot be a symbolic link. Otherwise, the link target will not exist in the collector's container, and log collection will fail.
The log collection source for container file paths can be specified in two ways: Specific workload and Specific pod labels.
Specific workload: The system collects container file paths from the specified container within the specified workload in the specified namespace, as shown in the following figure:

Specific pod labels: The system collects container file paths from all containers with specified Pod labels in the specified namespace, as shown in the following figure:

A container file path consists of a log directory and a file name. The log directory prefix starts with /, while the file name does not. Both the prefix and file name support the use of wildcards ? and *, but commas (,) are not supported. /**/ indicates that the log collection component will listen to log files matching all levels under the specified prefix directory. Multiple file paths are in an OR relationship. For example, if the container file path is /opt/logs/*.log, you can specify the directory prefix as /opt/logs and the file name as *.log.
Note:
Only container collection components of version 1.1.12 or later support multiple collection paths.
Only collection configurations created after the container collection component is upgraded to version 1.1.12 or later support defining multiple collection paths.
After the container collection component is upgraded to version 1.1.12, the collection configurations created in versions earlier than 1.1.12 do not support configuring multiple collection paths. The collection configurations need to be recreated.
Collection Path Blocklist: After the blocklist is enabled, the specified directory paths or complete file paths are ignored during collection. Directory paths and file paths can be matched exactly or with wildcards.

The collection blocklist supports two filter types, which can be used simultaneously:
File path: Ignores a complete file path under the collection path. The wildcards * and ? are supported, and ** matches multiple directory levels fuzzily.
Directory path: Ignores a directory prefix under the collection path. The wildcards * and ? are supported, and ** matches multiple directory levels fuzzily.
Note:
A container log collection component of version 1.1.2 or later is required.
The collection blocklist excludes paths under the collection path. Therefore, in both file path mode and directory path mode, the specified path should be a subset of the collection path.
A node file path consists of a log directory and a file name. The log directory prefix starts with /, while the file name does not. Both the prefix and file name support the use of the wildcards ? and *, but commas (,) are not supported. /**/ indicates that the log collection component will listen to log files matching all levels under the specified prefix directory. Multiple file paths are in an OR relationship. For example, if the node file path is /opt/logs/*.log, you can specify the directory prefix as /opt/logs and the file name as *.log.
Note:
Only collection components of version 1.1.12 or later support multiple collection paths.
Only collection configurations created after the container collection component is upgraded to version 1.1.12 or later support defining multiple collection paths.
After the container collection component is upgraded to version 1.1.12, the collection configurations created in versions earlier than 1.1.12 do not support configuring multiple collection paths. The collection configurations need to be recreated.
Collection Path Blocklist: After the blocklist is enabled, the specified directory paths or complete file paths are ignored during collection. Directory paths and file paths can be matched exactly or with wildcards.

The collection blocklist supports two filter types, which can be used simultaneously:
File path: Ignores a complete file path under the collection path. The wildcards * and ? are supported, and ** matches multiple directory levels fuzzily.
Directory path: Ignores a directory prefix under the collection path. The wildcards * and ? are supported, and ** matches multiple directory levels fuzzily.
Note:
A container log collection component of version 1.1.2 or later is required.
The collection blocklist excludes paths under the collection path. Therefore, in both file path mode and directory path mode, the specified path should be a subset of the collection path.
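The directory-prefix-plus-filename matching described above for container and node file paths can be sketched as follows. This is a minimal illustration in Python, not the actual matching logic of the collection component; the `matches` helper is hypothetical.

```python
import fnmatch
import os

def matches(path, dir_prefix, filename_pattern):
    """Illustrative check: does `path` sit directly under `dir_prefix`
    and does its file name match `filename_pattern`?"""
    directory, name = os.path.split(path)
    return (directory.rstrip("/") == dir_prefix.rstrip("/")
            and fnmatch.fnmatch(name, filename_pattern))

# Directory prefix /opt/logs plus file name *.log corresponds to the
# collection path /opt/logs/*.log from the example above.
print(matches("/opt/logs/app.log", "/opt/logs", "*.log"))   # True
print(matches("/opt/logs/app.txt", "/opt/logs", "*.log"))   # False
```

Note that the real component additionally supports `/**/` for matching all directory levels under the prefix, which this sketch does not implement.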
Metadata configuration:



In addition to the raw log content, container- or Kubernetes-related metadata (such as the ID of the container that generated the log) is also reported to CLS to make it easier for users to trace the source when viewing logs or perform searches based on container identifiers and features (such as container names or labels). You can choose whether to report this metadata and select the metadata for upload as needed.
For container- or Kubernetes-related metadata, see the table below:
container_id: ID of the container to which the log belongs.
container_name: Name of the container to which the log belongs.
image_name: Image name of the container to which the log belongs.
namespace: Namespace of the Pod to which the log belongs.
pod_uid: UID of the Pod to which the log belongs.
pod_name: Name of the Pod to which the log belongs.
pod_ip: IP address of the Pod to which the log belongs.
pod_label_{label name}: Label of the Pod to which the log belongs. For example, if a Pod has two labels, app=nginx and env=prod, the uploaded log will carry two metadata entries, pod_label_app:nginx and pod_label_env:prod.
Note:
If you want to collect partial Pod labels, manually enter the desired label keys. You can enter multiple keys by pressing Enter after each key. If the logs match any of the entered keys, they will be collected accordingly.
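The mapping from Pod labels to pod_label_{label name} metadata keys can be illustrated with a short Python sketch (the label values are hypothetical, taken from the example above):

```python
# Hypothetical Pod labels, as in the pod_label_{label name} example above.
pod_labels = {"app": "nginx", "env": "prod"}

# Each label key becomes a pod_label_{label name} metadata entry.
metadata = {f"pod_label_{key}": value for key, value in pod_labels.items()}
print(metadata)  # {'pod_label_app': 'nginx', 'pod_label_env': 'prod'}
```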
Parsing Rule Configuration:

Collection Policy: You can select All or New.
All: Full collection collects data from the beginning of the log file.
New: Incremental collection collects only the newly added content in the file.
Encode Mode: UTF-8 and GBK are supported.
Extraction Mode: Multiple types of extraction modes are supported. The details are as follows:
Single-Line Full-Text Format
Multi-line Full-Text Format
Single-Line Full Regular Expression Format
Multi-line Full Regular Expression Format
JSON Format
Delimiter
Combined Parsing
A single-line full-text log is a log in which each line is a complete log entry. When collecting logs, CLS uses the line break \n as the delimiter to mark the end of each log entry. For unified structured management, each log is given a default key-value pair with the key __CONTENT__. The log data itself is not structured, and no log fields are extracted. The time attribute of a log is the time when the log is collected.
Assume that the raw data of a log is:
Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
The data collected into CLS is:
__CONTENT__:Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
A multi-line full-text log is a complete log entry that may span multiple lines (such as a Java stack trace). In this case, using the line break \n as the end identifier of a log is not appropriate. To let the logging system clearly distinguish individual log entries, a regular expression is used to match the beginning of each entry: a line that matches the pre-configured first-line regular expression starts a new log entry, and the entry continues until the next line that matches the expression.
A multi-line full-text log will also have a default key-value pair __CONTENT__. However, the log data itself will not be processed in a structured manner, nor will log fields be extracted. The time attribute of a log is determined by the time when the log is collected.
Assume that the raw data of a multi-line log is:
2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:
java.lang.NullPointerException
at com.test.logging.FooFactory.createFoo(FooFactory.java:15)
at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
The first-line regular expression is as follows:
\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s.+
The data collected into CLS is:
__CONTENT__:2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:\njava.lang.NullPointerException\n at com.test.logging.FooFactory.createFoo(FooFactory.java:15)\n at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
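The first-line matching used to split multi-line entries can be sketched in Python. This is an illustration only, not the actual LogListener implementation; the regular expression is the single-backslash form of the first-line expression from the example above.

```python
import re

# First-line regular expression from the multi-line full-text example.
first_line = re.compile(r"\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s.+")

raw_lines = [
    "2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:",
    "java.lang.NullPointerException",
    "    at com.test.logging.FooFactory.createFoo(FooFactory.java:15)",
    "    at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)",
]

# A line matching the first-line regex starts a new entry; everything
# else is appended to the current entry.
entries, current = [], []
for line in raw_lines:
    if first_line.match(line) and current:
        entries.append("\n".join(current))
        current = []
    current.append(line)
if current:
    entries.append("\n".join(current))

print(len(entries))  # 1: all four lines form a single log entry
```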
The single-line full regular expression format is usually used to process structured logs. This represents a log parsing mode in which multiple key-value pairs are extracted from a complete log entry using regular expressions.
Assume that the raw data of a log is:
10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354
The configured regular expression is as follows:
(\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
The data collected into CLS is:
body_bytes_sent: 9703
http_host: 127.0.0.1
http_protocol: HTTP/1.1
http_referer: http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum
http_user_agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
remote_addr: 10.135.46.111
request_length: 782
request_method: GET
request_time: 0.354
request_url: /my/course/1
status: 200
time_local: [22/Jan/2019:19:19:30 +0800]
upstream_response_time: 0.354
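The extraction above can be reproduced with Python's standard `re` module as a sketch (the regular expression is the single-backslash form of the one configured above, and the key names follow the collected output):

```python
import re

# Configured regular expression from the single-line full regex example.
pattern = re.compile(
    r'(\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)'
    r'\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*'
)

# Key names in capture-group order, matching the collected output above.
keys = ["remote_addr", "time_local", "request_method", "request_url",
        "http_protocol", "http_host", "status", "request_length",
        "body_bytes_sent", "http_referer", "http_user_agent",
        "request_time", "upstream_response_time"]

line = ('10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] '
        '"GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 '
        '"http://127.0.0.1/course/explore?filter%5Btype%5D=all'
        '&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all'
        '&orderBy=studentNum" '
        '"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 '
        'Firefox/64.0" 0.354 0.354')

fields = dict(zip(keys, pattern.match(line).groups()))
print(fields["status"])       # 200
print(fields["remote_addr"])  # 10.135.46.111
```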
Assume that the raw data of a log is:
[2018-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
The first-line regular expression is:
\[\d+-\d+-\w+:\d+:\d+,\d+]\s\[\w+]\s.*
The configured custom regular expression is:
\[(\d+-\d+-\w+:\d+:\d+,\d+)\]\s\[(\w+)\]\s(.*)
After the system extracts the corresponding key-value pair based on the () capture group, you can customize the key name of each group as follows:
time: 2018-10-01T10:30:01,000
level: INFO
msg: java.lang.Exception: exception happened
at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
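The capture-group extraction on the assembled multi-line entry can be sketched in Python (illustrative only; the regular expression is the single-backslash form of the custom expression above, and the DOTALL flag lets the final group span the stack-trace lines):

```python
import re

# A complete multi-line entry, as assembled by first-line matching.
entry = (
    "[2018-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened\n"
    "    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)\n"
    "    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)\n"
    "    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)"
)

# Custom extraction regex from the example; DOTALL lets (.*) cross newlines.
extract = re.compile(r"\[(\d+-\d+-\w+:\d+:\d+,\d+)\]\s\[(\w+)\]\s(.*)", re.DOTALL)
time, level, msg = extract.match(entry).groups()
print(level)  # INFO
```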
Assume that the raw data of a JSON log is:
{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}
After being structured by CLS, the log becomes:
agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
body_sent: 23
http_host: 127.0.0.1
method: POST
referer: http://127.0.0.1/my/course/4
remote_ip: 10.135.46.111
request: POST /event/dispatch HTTP/1.1
response_code: 200
responsetime: 0.232
time_local: 22/Jan/2019:19:19:34 +0800
upstreamhost: unix:/tmp/php-cgi.sock
upstreamtime: 0.232
url: /event/dispatch
xff: -
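JSON-mode structuring corresponds to ordinary JSON decoding, which can be checked with Python's standard `json` module (the raw line is the sample above):

```python
import json

# The raw JSON log line from the example above.
raw = ('{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800",'
       '"body_sent":23,"responsetime":0.232,"upstreamtime":"0.232",'
       '"upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1",'
       '"method":"POST","url":"/event/dispatch",'
       '"request":"POST /event/dispatch HTTP/1.1","xff":"-",'
       '"referer":"http://127.0.0.1/my/course/4",'
       '"agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) '
       'Gecko/20100101 Firefox/64.0","response_code":"200"}')

fields = json.loads(raw)
print(fields["method"])     # POST
print(fields["body_sent"])  # 23
```

Note that unquoted JSON numbers (such as body_sent) decode as numbers, while quoted values (such as response_code) stay strings.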
Assume that the raw data of a log is:
10.20.20.10 - ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: 35 ::: http://127.0.0.1/
When the delimiter for log parsing is specified as :::, this log will be divided into eight fields, and each of these fields will be assigned a unique key, as shown below:
IP: 10.20.20.10 -
bytes: 35
host: 127.0.0.1
length: 647
referer: http://127.0.0.1/
request: GET /online/sample HTTP/1.1
status: 200
time: [Tue Jan 22 14:49:45 CST 2019 +0800]
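The delimiter split above can be reproduced with a short Python sketch (illustrative only; the key names follow the structured output above, and surrounding whitespace is stripped from each field):

```python
# Raw log line and ::: delimiter from the example above.
raw = ("10.20.20.10 - ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: "
       "GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: "
       "35 ::: http://127.0.0.1/")

# Key names in field order, as assigned in the structured output above.
keys = ["IP", "time", "request", "host", "status", "length", "bytes", "referer"]

fields = dict(zip(keys, (part.strip() for part in raw.split(":::"))))
print(len(fields))        # 8
print(fields["status"])   # 200
```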
Assume that the raw data of a log is:
1571394459, http://127.0.0.1/my/course/4|10.135.46.111|200, status:DEAD,
The custom extension content is as follows:
{
    "processors": [
        {
            "type": "processor_split_delimiter",
            "detail": {
                "Delimiter": ",",
                "ExtractKeys": ["time", "msg1", "msg2"]
            },
            "processors": [
                {
                    "type": "processor_timeformat",
                    "detail": {
                        "KeepSource": true,
                        "TimeFormat": "%s",
                        "SourceKey": "time"
                    }
                },
                {
                    "type": "processor_split_delimiter",
                    "detail": {
                        "KeepSource": false,
                        "Delimiter": "|",
                        "SourceKey": "msg1",
                        "ExtractKeys": ["submsg1", "submsg2", "submsg3"]
                    },
                    "processors": []
                },
                {
                    "type": "processor_split_key_value",
                    "detail": {
                        "KeepSource": false,
                        "Delimiter": ":",
                        "SourceKey": "msg2"
                    }
                }
            ]
        }
    ]
}
After being structured by CLS, the log becomes:
time: 1571394459
submsg1: http://127.0.0.1/my/course/4
submsg2: 10.135.46.111
submsg3: 200
status: DEAD
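The effect of the processor chain above can be sketched in plain Python (illustrative only, not the actual processor implementation; the timestamp conversion step is omitted):

```python
# Raw log line from the combined parsing example above.
raw = "1571394459, http://127.0.0.1/my/course/4|10.135.46.111|200, status:DEAD,"

# Step 1: processor_split_delimiter splits by "," into time / msg1 / msg2
# (the trailing empty field is dropped, whitespace stripped).
time, msg1, msg2 = [part.strip() for part in raw.split(",")][:3]

# Step 2: a nested processor_split_delimiter splits msg1 by "|"
# into submsg1..submsg3.
submsg1, submsg2, submsg3 = msg1.split("|")

# Step 3: processor_split_key_value splits msg2 by ":" into a key-value pair.
key, value = msg2.split(":", 1)

result = {"time": time, "submsg1": submsg1, "submsg2": submsg2,
          "submsg3": submsg3, key: value}
print(result["status"])  # DEAD
```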
Configure the Log timestamp source: You can select the Log collection time or a Specified log field as the log timestamp.
Filter: LogListener only collects logs that meet filter rules. Keys support exact matching, and filtering rules support regular expression matching. For example, you can configure the filter to only collect logs where ErrorCode is set to 404. You can enable the filter and configure rules as needed.
Upload Parsing-Failed Logs: It is recommended to enable this option. When it is enabled, LogListener uploads all logs that fail to be parsed; when it is disabled, these logs are discarded.
Advanced Configuration: Select the advanced configuration options you need by checking the corresponding items.

In multi-line full regular expression extraction mode, the following advanced configurations are supported (only the first 2 items are supported in other modes):
Name
Description
Configuration Item
Timeout property
This configuration controls the timeout for log files. If a log file has no updates within the specified time, it is considered timed out, and LogListener stops collecting from it. If you have a large number of log files, it is recommended to reduce the timeout to avoid wasting LogListener performance.
No timeout: Log files never time out.
Custom: The timeout for log files can be customized.
Maximum directory levels
The configuration controls the maximum directory depth for log collection. LogListener does not collect log files in directories that exceed the specified maximum directory depth. If your target collection path includes fuzzy matching, it is recommended to configure an appropriate maximum directory depth to avoid LogListener performance waste.
An integer greater than 0. 0 means no drilling down into subdirectories.
Merging logs that fail to be parsed
Note:
The feature for merging logs that failed to be parsed can only be configured for LogListener 2.8.8 and later versions.
This configuration allows LogListener to merge the logs that have continuously failed to be parsed in the target log file into a single log for upload during collection. If your first-line regular expression does not cover all multi-line logs, it is recommended to enable this configuration. This helps avoid the situation where a multi-line log, which fails the first-line match, gets split into multiple individual log entries.
Enable/Disable
If you need to further process the collected CLS logs, such as structuring, masking, or filtering, before writing them into the log topic, you can click Data Processing at the bottom of the page, add data processing, and then configure the index.



Note:
For data processing-related operations, see the Preprocessing of Data tab in Creating Processing Task.
For information about writing data processing scripts, see Overview of Data Processing Functions or Practical Processing Cases.
Data processing will incur fees. For details, see Billing Overview.

Step 4: Configuring Indexes

1. Click Next to go to the Index Configuration page.
2. On the Index Configuration page, configure the following information. For configuration details, see Configuring Indexes.

Note:
Index configuration must be enabled before you can perform searches.

Step 5: Searching Logs

At this point, all deployments for collecting TKE Kubernetes cluster business logs have been completed. You can log in to the CLS console and choose Search and Analysis to view the collected logs.

Cluster Audit/Event Log Collection

Note:
Cluster audit logs record access events of kube-apiserver and sequentially record the activities of each user, administrator, or system component that affect clusters.
Cluster event logs record the operation of clusters and the scheduling of various resources.

Step 1: Selecting a Cluster

1. Log in to the CLS console.
2. In the left sidebar, choose Container Clusters to go to the container cluster management page.
3. In the upper-right corner of the page, select TKE cluster.

4. Select the region where the TKE cluster resides and find the target collection cluster.
5. If the status of the collection component is Not installed, click Install to install the log collection component.

Note:
If the log collection component is installed in a cluster, Pods named tke-log-agent and cls-provisioner will be deployed as a DaemonSet in the kube-system namespace of the cluster. Reserve at least 0.1 CPU cores and 16 MiB of memory on each node for them.
6. If the status of the collection component is Latest, click the cluster name to go to the cluster details page, and find Cluster audit log or Cluster event log on the cluster details page.



7. Click to enable cluster audit logs or cluster event logs and go to the cluster audit or event log configuration process page.

Step 2: Selecting a Log Topic

Go to the audit or event log configuration process page. In the Create Log Topic step, you can select an existing log topic or create a log topic for storing logs. For more information about log topics, see Log Topics.


Step 3: Configuring Indexes

1. Click Next to go to the Index Configuration page.
2. On the Index Configuration page, configure the following information. For configuration details, see Index Configuration.

Note:
Index configuration must be enabled before you can perform searches.

Step 4: Searching Logs

At this point, all deployments for collecting TKE Kubernetes cluster audit or event business logs have been completed. You can log in to the CLS console and choose Search and Analysis to view the collected logs.

Other Operations

Managing Business Log Collection Configurations

1. On the Container Cluster Management page, find the target TKE cluster and click the cluster name to go to the cluster details page.
2. On the cluster details page, you can view and manage your cluster business log collection configurations in Cluster business log.


Upgrading the Log Collection Component

On the Container Cluster Management page, find the target TKE cluster. If the collection component status is Upgradable, click Upgrade to upgrade the log collection component to the latest version.



Uninstalling the Log Collection Component

On the Container Cluster Management page, find the target TKE cluster, click More in the operation column, and then click Uninstall Collection Component in the drop-down list.


