Copyright Notice
©2013-2025 Tencent Cloud. All rights reserved.
Copyright in this document is exclusively owned by Tencent Cloud. You must not reproduce, modify, copy or distribute in any way, in whole or in part, the contents of this document without Tencent Cloud's prior written consent.
Trademark Notice
All trademarks associated with Tencent Cloud and its services are owned by the Tencent corporate group, including its parent, subsidiaries and affiliated companies, as the case may be. Trademarks of third parties referred to in this document are owned by their respective proprietors.
Service Statement
This document is intended to provide users with general information about Tencent Cloud's products and services only and does not form part of Tencent Cloud's terms and conditions. Tencent Cloud's products or services are subject to change. Specific products and services and the standards applicable to them are exclusively provided for in Tencent Cloud's applicable terms and conditions.
Last updated:2024-01-20 16:42:10
For more information on topic partitions, see Topic Partition.
Last updated:2024-09-20 17:48:27
Last updated:2024-05-31 14:41:27
Metric | Meaning | Unit | Dimension |
Write traffic | Traffic incurred during log upload | MB | Log topic ID |
Index traffic | Traffic incurred after the index feature is enabled | MB | Log topic ID |
Private network read traffic | Private network read traffic incurred when a log is downloaded, consumed, or shipped via a private network | MB | Log topic ID |
Public network read traffic | Public network read traffic incurred when a log is downloaded, consumed, or shipped via a public network | MB | Log topic ID |
Read traffic | Total private and public network read traffic | MB | Log topic ID |
Metric | Meaning | Unit | Dimension |
Log storage capacity | Storage capacity occupied by log data | MB | Log topic ID |
Index storage capacity | Storage capacity occupied by index data | MB | Log topic ID |
Storage capacity | Total storage capacity occupied by log and index data | MB | Log topic ID |
Metric | Meaning | Unit | Dimension |
Service requests | Number of requests made to CLS APIs via LogListener, the API, or SDKs, including read and write requests such as upload and download, creation and deletion, and search and analysis | COUNT | Log topic ID |
Last updated:2025-11-13 16:45:27
Take cls_test as an example. Open the etc/loglistener.conf file under the installation directory of LogListener.
Take the /usr/local installation directory as an example:
vi /usr/local/loglistener-2.3.0/etc/loglistener.conf
Find the group_label parameter, enter your custom machine IDs, and separate them with commas (,), as in the sketch below.
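For example, to assign the machine ID cls_test plus a second hypothetical ID cls_test2, the edited line would look roughly as follows (a sketch; keep whatever key/value syntax your loglistener.conf already uses):
group_label = cls_test,cls_test2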

Restart LogListener for the change to take effect:
/etc/init.d/loglistenerd restart

Last updated:2024-01-20 16:56:41
Last updated:2024-12-03 14:57:20
Resource Type | Resource Description Method in Access Policies | Authorization by Tag |
Logset | qcs::cls:$region:$account:logset/* or qcs::cls:$region:$account:logset/$logsetId | Supported |
Log topic | qcs::cls:$region:$account:topic/* or qcs::cls:$region:$account:topic/$topicId | Supported |
Machine group | qcs::cvm:$region:$account:machinegroup/* or qcs::cvm:$region:$account:machinegroup/$machinegroupId | Supported |
Collection configuration | qcs::cls:$region:$account:config/* or qcs::cls:$region:$account:config/$configId | Not supported |
Dashboard | qcs::cls:$region:$account:dashboard/* or qcs::cls:$region:$account:dashboard/$dashboardId | Supported |
Alarm policy | qcs::cls:$region:$account:alarm/* or qcs::cls:$region:$account:alarm/$alarmId | Not supported |
Notification channel group | qcs::cls:$region:$account:alarmNotice/* or qcs::cls:$region:$account:alarmNotice/$alarmNoticeId | Not supported |
Data processing task | qcs::cls:$region:uin/$account:datatransform/* or qcs::cls:$region:uin/$account:datatransform/$TaskId | Not supported |
Shipping task (COS) | qcs::cls:$region:$account:shipper/* or qcs::cls:$region:$account:shipper/$shipperId | Not supported |
Other resource types (disused; used by APIs of earlier versions only) | Single chart in the dashboard: qcs::cls:$region:$account:chart/* or qcs::cls:$region:$account:chart/$chartId | Not supported |
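For example, with hypothetical values substituted for $region and $account, a single log topic would be described as follows (the account and topic ID are placeholders):
qcs::cls:ap-guangzhou:100000000001:topic/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx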
Replace $region and $account with your actual parameter information. A resource can also be described as *, indicating all resources. To avoid misoperations by ordinary users, you can configure read-only permissions for ordinary users and management permissions for admins. For example, you can assign admins the management permissions on all data processing tasks and assign ordinary users the read-only permissions on all data processing tasks.
Last updated:2025-11-18 11:23:06
Module | Application Scenario |
Overall Operation (Best Practices) | Classify topics, machine groups, and dashboards by using tags, and configure permissions by tag: |
Data collection | |
Topic management and search/analysis | Viewing/Managing Topics and Performing Search/Analysis Using APIs to Perform Search and Analysis |
Dashboard | |
Monitoring alarm | |
Data Processing | Data Processing Performing Scheduled SQL Analysis |
Data shipping and consumption | Shipping to CKafka Shipping to COS Shipping to DLC Shipping to Splunk Shipping to SCF Kafka Protocol Consumption Shipping Metric Topics Custom Consumption |
Independent DataSight console | Manage DataSight consoles: |
Developer | Using CLS Through Grafana |
{"statement": [{"action": [ //Required read-only permission for related products"monitor:GetMonitorData","monitor:DescribeBaseMetrics","cam:ListGroups","cam:GetGroup","cam:DescribeSubAccountContacts","cam:ListAttachedRolePolicies","cam:GetRole","vpc:DescribeSubnetEx",//Required for creating DataSight consoles accessed via the private network"vpc:DescribeVpcEx",//Required for creating DataSight consoles accessed via the private network"tag:TagResources","tag:DescribeResourceTagsByResourceIds","tag:GetTags","tag:GetTagKeys","tag:GetTagValues","kms:GetServiceStatus"],"effect": "allow","resource": "*"},{"action": [ //Specify that tags such as testCAM:test1 are required for creating dashboards, logsets, topics, alarm policies, notification channel groups, machine groups, and DataSight consoles. Tags are not supported for creating other types of resources."cls:CreateDashboard","cls:CreateLogset","cls:CreateTopic","cls:CreateAlarm","cls:CreateAlarmNotice","cls:CreateMachineGroup","cls:CreateConsole"],"condition": {"for_any_value:string_equal": {"qcs:request_tag": ["testCAM&test1"]}},"effect": "allow","resource": "*"},{"action": [ //Grant permission on all related APIs if tags are specified for resources. (APIs should support permission control by tag.)"cls:*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["testCAM&test1"]}},"effect": "allow","resource": "*"},{"action": [ //Some APIs do not support permission control by tag or resource scope limit. Most of the APIs below involve read operations, while some APIs of auxiliary features involve write operations. All these APIs do not affect the core data security of products."cls:CheckAlarmChannel","cls:CheckAlarmRule","cls:CheckDomainRepeat","cls:CheckFunction","cls:CheckRechargeKafkaServer","cls:DescribeClsPrePayDetails","cls:DescribeClsPrePayInfos","cls:DescribeConfigMachineGroups","cls:DescribeConfigs","cls:DescribeAgentConfigs","cls:DescribeTopicExtendConfig","cls:DescribeDataTransformFailLogInfo","cls:DescribeDataTransformInfo","cls:DescribeDataTransformPreviewDataInfo","cls:DescribeDataTransformPreviewInfo","cls:DescribeDataTransformProcessInfo","cls:DescribeDemonstrations","cls:DescribeExceptionResources","cls:DescribeExternalDataSourcePreview","cls:DescribeFunctions","cls:DescribeResources","cls:DescribeShipperPreview","cls:DescribeScheduledSqlProcessInfo","cls:DescribeConfigurationTemplates","cls:DescribeFolders","cls:GetClsService","cls:GetConfigurationTemplateApplyLog","cls:PreviewKafkaRecharge","cls:agentHeartBeat","cls:CreateDemonstrations","cls:DeleteDemonstrations","cls:DescribeNoticeContents","cls:DescribeWebCallbacks"],"effect": "allow","resource": "*"},{"action": [ //Some APIs do not support permission control by tag or resource scope limit. The APIs below involve write operations of core features. It is recommended to grant permissions only to certain users as required. 
APIs that require no permission grants can be deleted."cls:RealtimeProducer", //Upload data by using Kafka"cls:CreateConfigurationTemplate", //Configuration template API"cls:ModifyConfigurationTemplate","cls:DeleteConfigurationTemplate","cls:CreateFolder", //Folder API"cls:ModifyFolder","cls:DeleteFolder","cls:ModifyResourceAndFolderRelation","cls:CreateDataTransform",//Data processing API"cls:ModifyDataTransform","cls:DeleteDataTransform","cls:RetryShipperTask",//COS shipping API"cls:ModifyDashboardSubscribeAck",//Dashboard subscription API"cls:DeleteDashboardSubscribe","cls:ModifyConfigExtra",//Collection configuration API"cls:DeleteConfigExtra","cls:RemoveMachine",//Machine group API"cls:UpgradeAgentNormal","cls:CreateNoticeContent",//API related to alarm notification templates"cls:DeleteNoticeContent","cls:ModifyNoticeContent","cls:CreateWebCallback",//API related to alarm integration configuration"cls:ModifyWebCallback","cls:DeleteWebCallback"],"effect": "allow","resource": "*"}],"version": "2.0"}
{"statement": [{"action": [ //Required read-only permission for related products"monitor:GetMonitorData","monitor:DescribeBaseMetrics","cam:ListGroups","cam:GetGroup","cam:DescribeSubAccountContacts","cam:ListAttachedRolePolicies","tag:DescribeResourceTagsByResourceIds","tag:GetTags","tag:GetTagKeys","tag:GetTagValues"],"effect": "allow","resource": "*"},{"action": [ //Grant read-only permission on related APIs if tags are specified for resources."cls:DescribeConsumer","cls:DescribeConsumerPreview","cls:DescribeCosRecharges","cls:DescribeDashboardSubscribes","cls:DescribeDashboards","cls:DescribeExports","cls:DescribeIndex","cls:DescribeIndexs","cls:DescribeKafkaConsume","cls:DescribeKafkaConsumer","cls:DescribeKafkaRecharges","cls:DescribeLatestJsonLog","cls:DescribeLatestUserLog","cls:DescribeLogContext","cls:DescribeLogFastAnalysis","cls:DescribeLogHistogram","cls:DescribeMachineGroupConfigs","cls:DescribeMachines","cls:DescribePartitions","cls:DescribeScheduledSqlInfo","cls:DescribeScheduledSqlProcessInfo","cls:DescribeShipperPreview","cls:DescribeTopics","cls:EstimateRebuildIndexTask","cls:GetAlarm","cls:GetAlarmLog","cls:GetMetricLabelValues","cls:GetMetricSeries","cls:MetricsLabelValues","cls:MetricsLabels","cls:MetricsQuery","cls:MetricsQueryExemplars","cls:MetricsQueryRange","cls:MetricsSeries","cls:QueryMetric","cls:QueryRangeMetric","cls:SearchCosRechargeInfo","cls:SearchDashboardSubscribe","cls:SearchLog","cls:DescribeAlarmNotices","cls:DescribeAlarms","cls:DescribeAlertRecordHistory","cls:DescribeExternalDataSources","cls:DescribeLogsets","cls:DescribeMachineGroups","cls:DescribeConsoles","cls:DescribeShipperTasks","cls:DescribeShippers","cls:DescribeRebuildIndexTasks"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["testCAM&test1"]}},"effect": "allow","resource": "*"},{"action": [ //Some APIs do not support permission control by tag or resource scope limit. Most of the APIs below involve read operations, while some APIs of auxiliary features involve write operations. All these APIs do not affect the core data security of products."cls:CheckAlarmChannel","cls:CheckAlarmRule","cls:CheckDomainRepeat","cls:CheckFunction","cls:CheckRechargeKafkaServer","cls:DescribeClsPrePayDetails","cls:DescribeClsPrePayInfos","cls:DescribeConfigMachineGroups","cls:DescribeConfigs","cls:DescribeAgentConfigs","cls:DescribeTopicExtendConfig","cls:DescribeDataTransformFailLogInfo","cls:DescribeDataTransformInfo","cls:DescribeDataTransformPreviewDataInfo","cls:DescribeDataTransformPreviewInfo","cls:DescribeDataTransformProcessInfo","cls:DescribeDemonstrations","cls:DescribeExceptionResources","cls:DescribeExternalDataSourcePreview","cls:DescribeFunctions","cls:DescribeResources","cls:DescribeShipperPreview","cls:DescribeScheduledSqlProcessInfo","cls:DescribeConfigurationTemplates","cls:DescribeFolders","cls:GetClsService","cls:GetConfigurationTemplateApplyLog","cls:PreviewKafkaRecharge","cls:CreateDemonstrations","cls:DeleteDemonstrations","cls:CreateExport","cls:DeleteExport","cls:DescribeNoticeContents","cls:DescribeWebCallbacks"],"effect": "allow","resource": "*"}],"version": "2.0"}
{"version": "2.0","statement": [{"action": ["cls:pushLog","cls:getConfig","cls:agentHeartBeat"],"resource": "*","effect": "allow"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:MetricsRemoteWrite"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"action": ["cls:pushLog","cls:agentHeartBeat","cls:getConfig","cls:CreateConfig","cls:DeleteConfig","cls:ModifyConfig","cls:DescribeConfigs","cls:DescribeMachineGroupConfigs","cls:DeleteConfigFromMachineGroup","cls:ApplyConfigToMachineGroup","cls:DescribeConfigMachineGroups","cls:ModifyTopic","cls:DeleteTopic","cls:CreateTopic","cls:DescribeTopics","cls:CreateLogset","cls:DeleteLogset","cls:DescribeLogsets","cls:CreateIndex","cls:ModifyIndex","cls:CreateMachineGroup","cls:DeleteMachineGroup","cls:DescribeMachineGroups","cls:ModifyMachineGroup","cls:CreateConfigExtra","cls:DeleteConfigExtra","cls:ModifyConfigExtra"],"resource": "*","effect": "allow"}]}
{"version": "2.0","statement": [{"action": ["cls:pushLog","cls:UploadLog","cls:MetricsRemoteWrite"],"resource": "*","effect": "allow"}]}
{"version": "2.0","statement": [{"action": ["cls:RealtimeProducer"],"resource": "*","effect": "allow"}]}
{"version": "2.0","statement": [{"action": ["cls:CreateMetricSubscribe","cls:DescribeMetricCorrectDimension","cls:DescribeMetricSubscribePreview","monitor:DescribeBaseMetrics","monitor:DescribeProductList"],"resource": "*","effect": "allow"}]}
{"version": "2.0","statement": [{"action": ["cls:CreateBinlogSubscribe","cls:DescribeBinlogSubscribes","cls:ModifyBinlogSubscribe","cls:DescribeBinlogSubscribeConnectivity","cls:DescribeBinlogSubscribePreview",],"resource": "*","effect": "allow"}]}
{"version": "2.0","statement": [{"action": ["cls:PreviewKafkaRecharge","cls:CreateKafkaRecharge","cls:ModifyKafkaRecharge",],"resource": "*","effect": "allow"}]}
{"version": "2.0","statement": [{"action": ["cls:pushLog",],"resource": "*","effect": "allow"}]}
{"version": "2.0","statement": [{"action": ["cls:pushLog",],"resource": "*","effect": "allow"}]}
{"version": "2.0","statement": [{"action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:CreateConfig","cls:CreateConfig","cls:DeleteConfig","cls:DescribeConfigs","cls:ModifyConfig","cls:CreateConfigExtra","cls:DeleteConfigExtra","cls:ModifyConfigExtra","cls:CreateMachineGroup","cls:DeleteMachineGroup","cls:DescribeMachineGroups","cls:DeleteConfigFromMachineGroup","cls:ApplyConfigToMachineGroup","cls:ModifyMachineGroup"],"resource": "*","effect": "allow"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:CreateLogset","cls:CreateTopic","cls:CreateExport","cls:CreateIndex","cls:DeleteLogset","cls:DeleteTopic","cls:DeleteExport","cls:DeleteIndex","cls:ModifyLogset","cls:ModifyTopic","cls:ModifyIndex","cls:MergePartition","cls:SplitPartition","cls:DescribeLogsets","cls:DescribeTopics","cls:DescribeExports","cls:DescribeIndex","cls:DescribeIndexs","cls:DescribePartitions","cls:SearchLog","cls:DescribeLogHistogram","cls:DescribeLogContext","cls:DescribeLogFastAnalysis","cls:DescribeLatestJsonLog","cls:DescribeRebuildIndexTasks","cls:CreateRebuildIndexTask","cls:EstimateRebuildIndexTask","cls:CancelRebuildIndexTask","cls:MetricsLabelValues","cls:MetricsLabels","cls:MetricsQuery","cls:MetricsQueryRange","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:CreateLogset","cls:CreateTopic","cls:CreateExport","cls:CreateIndex","cls:DeleteLogset","cls:DeleteTopic","cls:DeleteExport","cls:DeleteIndex","cls:ModifyLogset","cls:ModifyTopic","cls:ModifyIndex","cls:MergePartition","cls:SplitPartition","cls:DescribeLogsets","cls:DescribeTopics","cls:DescribeExports","cls:DescribeIndex","cls:DescribeIndexs","cls:DescribePartitions","cls:SearchLog","cls:DescribeLogHistogram","cls:DescribeLogContext","cls:DescribeLogFastAnalysis","cls:DescribeLatestJsonLog","cls:DescribeRebuildIndexTasks","cls:CreateRebuildIndexTask","cls:EstimateRebuildIndexTask","cls:CancelRebuildIndexTask","cls:MetricsLabelValues","cls:MetricsLabels","cls:MetricsQuery","cls:MetricsQueryRange","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["qcs::cls:ap-guangzhou:100007*827:logset/1c012db7-2cfd-4418-**-7342c7a42516","qcs::cls:ap-guangzhou:100007*827:topic/380fe1f1-0c7b-4b0d-**-d514959db1bb"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:CreateLogset","cls:CreateTopic","cls:CreateExport","cls:CreateIndex","cls:DeleteLogset","cls:DeleteTopic","cls:DeleteExport","cls:DeleteIndex","cls:ModifyLogset","cls:ModifyTopic","cls:ModifyIndex","cls:MergePartition","cls:SplitPartition","cls:DescribeLogsets","cls:DescribeTopics","cls:DescribeExports","cls:DescribeIndex","cls:DescribeIndexs","cls:DescribePartitions","cls:SearchLog","cls:DescribeLogHistogram","cls:DescribeLogContext","cls:DescribeLogFastAnalysis","cls:DescribeLatestJsonLog","cls:DescribeRebuildIndexTasks","cls:CreateRebuildIndexTask","cls:EstimateRebuildIndexTask","cls:CancelRebuildIndexTask","cls:MetricsLabelValues","cls:MetricsLabels","cls:MetricsQuery","cls:MetricsQueryRange","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["testCAM&test1"]}}}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:DescribeExports","cls:DescribeIndex","cls:DescribeIndexs","cls:DescribePartitions","cls:SearchLog","cls:DescribeLogHistogram","cls:DescribeLogContext","cls:DescribeLogFastAnalysis","cls:DescribeLatestJsonLog","cls:DescribeRebuildIndexTasks","cls:MetricsLabelValues","cls:MetricsLabels","cls:MetricsQuery","cls:MetricsQueryRange","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:DescribeExports","cls:DescribeIndex","cls:DescribeIndexs","cls:DescribePartitions","cls:SearchLog","cls:DescribeLogHistogram","cls:DescribeLogContext","cls:DescribeLogFastAnalysis","cls:DescribeLatestJsonLog","cls:DescribeRebuildIndexTasks","cls:MetricsLabelValues","cls:MetricsLabels","cls:MetricsQuery","cls:MetricsQueryRange","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["qcs::cls:ap-guangzhou:100007*827:logset/1c012db7-2cfd-4418-**-7342c7a42516","qcs::cls:ap-guangzhou:100007*827:topic/380fe1f1-0c7b-4b0d-**-d514959db1bb"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:DescribeExports","cls:DescribeIndex","cls:DescribeIndexs","cls:DescribePartitions","cls:SearchLog","cls:DescribeLogHistogram","cls:DescribeLogContext","cls:DescribeLogFastAnalysis","cls:DescribeLatestJsonLog","cls:DescribeRebuildIndexTasks","cls:MetricsLabelValues","cls:MetricsLabels","cls:MetricsQuery","cls:MetricsQueryRange","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["testCAM&test1"]}}}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:SearchLog","cls:MetricsLabelValues","cls:MetricsLabels","cls:MetricsQuery","cls:MetricsQueryRange","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries","cls:MetricsRemoteRead"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:SearchLog","cls:MetricsLabelValues","cls:MetricsLabels","cls:MetricsQuery","cls:MetricsQueryRange","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries","cls:MetricsRemoteRead"],"resource": ["qcs::cls:ap-guangzhou:100007*827:logset/1c012db7-2cfd-4418-**-7342c7a42516","qcs::cls:ap-guangzhou:100007*827:topic/380fe1f1-0c7b-4b0d-**-d514959db1bb"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:SearchLog","cls:MetricsLabelValues","cls:MetricsLabels","cls:MetricsQuery","cls:MetricsQueryRange","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries","cls:MetricsRemoteRead"],"resource": ["*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["testCAM&test1"]}}}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:GetChart","cls:GetDashboard","cls:ListChart","cls:CreateChart","cls:CreateDashboard","cls:DeleteChart","cls:DeleteDashboard","cls:ModifyChart","cls:ModifyDashboard","cls:DescribeDashboards","cls:CreateFolder","cls:DeleteFolder","cls:DescribeFolders","cls:ModifyFolder","cls:ModifyResourceAndFolderRelation","cls:SearchDashboardSubscribe","cls:CreateDashboardSubscribe","cls:ModifyDashboardSubscribe","cls:DescribeDashboardSubscribes","cls:DeleteDashboardSubscribe","cls:ModifyDashboardSubscribeAck"],"resource": "*"},{"effect": "allow","action": ["cls:SearchLog","cls:DescribeTopics","cls:DescribeLogFastAnalysis","cls:DescribeIndex","cls:DescribeLogsets","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:GetChart","cls:GetDashboard","cls:ListChart","cls:CreateChart","cls:CreateDashboard","cls:DeleteChart","cls:DeleteDashboard","cls:ModifyChart","cls:ModifyDashboard","cls:DescribeDashboards","cls:CreateFolder","cls:DeleteFolder","cls:DescribeFolders","cls:ModifyFolder","cls:ModifyResourceAndFolderRelation","cls:SearchDashboardSubscribe","cls:CreateDashboardSubscribe","cls:ModifyDashboardSubscribe","cls:DescribeDashboardSubscribes","cls:DeleteDashboardSubscribe","cls:ModifyDashboardSubscribeAck"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["cls:SearchLog","cls:DescribeTopics","cls:DescribeLogFastAnalysis","cls:DescribeIndex","cls:DescribeLogsets","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:GetChart","cls:GetDashboard","cls:ListChart","cls:CreateChart","cls:CreateDashboard","cls:DeleteChart","cls:DeleteDashboard","cls:ModifyChart","cls:ModifyDashboard","cls:DescribeDashboards","cls:CreateFolder","cls:DeleteFolder","cls:DescribeFolders","cls:ModifyFolder","cls:ModifyResourceAndFolderRelation","cls:SearchDashboardSubscribe","cls:CreateDashboardSubscribe","cls:ModifyDashboardSubscribe","cls:DescribeDashboardSubscribes","cls:DeleteDashboardSubscribe","cls:ModifyDashboardSubscribeAck"],"resource": ["qcs::cls::uin/100000*001:dashboard/dashboard-0769a3ba-2514-409d-**-f65b20b23736"]},{"effect": "allow","action": ["cls:SearchLog","cls:DescribeTopics","cls:DescribeLogFastAnalysis","cls:DescribeIndex","cls:DescribeLogsets","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["qcs::cls::uin/100000*001:topic/174ca473-50d0-4fdf-**-2ef681a1e02a"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:GetChart","cls:GetDashboard","cls:ListChart","cls:DescribeDashboards","cls:DescribeFolders","cls:SearchDashboardSubscribe","cls:DescribeDashboardSubscribes"],"resource": "*"},{"effect": "allow","action": ["cls:SearchLog","cls:DescribeTopics","cls:DescribeLogFastAnalysis","cls:DescribeIndex","cls:DescribeLogsets","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:GetChart","cls:GetDashboard","cls:ListChart","cls:DescribeDashboards","cls:DescribeFolders","cls:SearchDashboardSubscribe","cls:DescribeDashboardSubscribes"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["cls:SearchLog","cls:DescribeTopics","cls:DescribeLogFastAnalysis","cls:DescribeIndex","cls:DescribeLogsets","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:GetChart","cls:GetDashboard","cls:ListChart","cls:DescribeDashboards","cls:DescribeFolders","cls:SearchDashboardSubscribe","cls:DescribeDashboardSubscribes"],"resource": ["qcs::cls::uin/100000*001:dashboard/dashboard-0769a3ba-2514-409d-**-f65b20b23736"]},{"effect": "allow","action": ["cls:SearchLog","cls:DescribeTopics","cls:DescribeLogFastAnalysis","cls:DescribeIndex","cls:DescribeLogsets","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["qcs::cls::uin/100000*001:topic/174ca473-50d0-4fdf-**-2ef681a1e02a"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:SearchLog","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["*"]},{"effect": "allow","action": ["cls:DescribeAlarms","cls:CreateAlarm","cls:ModifyAlarm","cls:DeleteAlarm","cls:DescribeAlarmNotices","cls:CreateAlarmNotice","cls:ModifyAlarmNotice","cls:DeleteAlarmNotice","cam:ListGroups","cam:DescribeSubAccountContacts","cam:GetGroup","cls:GetAlarmLog","cls:DescribeAlertRecordHistory","cls:CheckAlarmRule","cls:CheckAlarmChannel"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:SearchLog","cam:ListGroups","cam:DescribeSubAccountContacts","cam:GetGroup","cls:CheckAlarmRule","cls:CheckAlarmChannel","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["*"]},{"effect": "allow","action": ["cls:DescribeAlarms","cls:ModifyAlarm","cls:DeleteAlarm","cls:DescribeAlarmNotices","cls:ModifyAlarmNotice","cls:DeleteAlarmNotice","cls:GetAlarmLog","cls:DescribeAlertRecordHistory"],"resource": ["*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:SearchLog","cam:ListGroups","cam:DescribeSubAccountContacts","cam:GetGroup","cls:CheckAlarmRule","cls:CheckAlarmChannel","cls:GetMetricLabelValues","cls:QueryMetric","cls:QueryRangeMetric","cls:GetMetricSeries"],"resource": ["*"]},{"effect": "allow","action": ["cls:DescribeAlarms","cls:ModifyAlarm","cls:DeleteAlarm","cls:DescribeAlarmNotices","cls:ModifyAlarmNotice","cls:DeleteAlarmNotice","cls:GetAlarmLog","cls:DescribeAlertRecordHistory"],"resource": ["qcs::cls:ap-guangzhou:100007***827:alarm/alarm-xxx-9bbe-4625-ac29-b5e66bf643cf","qcs::cls:ap-guangzhou:100007***827:alarmNotice/notice-xxx-ec2c-410f-924f-4ee8a7cd028e"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics"],"resource": ["*"]},{"effect": "allow","action": ["cls:DescribeAlarms","cls:DescribeAlarmNotices","cls:GetAlarmLog","cls:DescribeAlertRecordHistory","cam:ListGroups","cam:DescribeSubAccountContacts","cam:GetGroup"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cam:ListGroups","cam:DescribeSubAccountContacts","cam:GetGroup"],"resource": ["*"]},{"effect": "allow","action": ["cls:DescribeAlarms","cls:DescribeAlarmNotices","cls:GetAlarmLog","cls:DescribeAlertRecordHistory"],"resource": ["*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cam:ListGroups","cam:DescribeSubAccountContacts","cam:GetGroup"],"resource": ["*"]},{"effect": "allow","action": ["cls:DescribeAlarms","cls:DescribeAlarmNotices","cls:GetAlarmLog","cls:DescribeAlertRecordHistory"],"resource": ["qcs::cls:ap-guangzhou:100007***827:alarm/alarm-xxx-9bbe-4625-ac29-b5e66bf643cf","qcs::cls:ap-guangzhou:100007***827:alarmNotice/notice-xxx-ec2c-410f-924f-4ee8a7cd028e"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeDataTransformPreviewDataInfo","cls:DescribeTopics","cls:DescribeIndex","cls:CreateDataTransform"],"resource": ["*"]},{"effect": "allow","action": ["cls:DescribeFunctions","cls:CheckFunction","cls:DescribeDataTransformFailLogInfo","cls:DescribeDataTransformInfo","cls:DescribeDataTransformPreviewInfo","cls:DescribeDataTransformProcessInfo","cls:DeleteDataTransform","cls:ModifyDataTransform"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics"],"resource": ["*"]},{"effect": "allow","action": ["cls:DescribeDataTransformFailLogInfo","cls:DescribeDataTransformInfo","cls:DescribeDataTransformPreviewDataInfo","cls:DescribeDataTransformPreviewInfo","cls:DescribeDataTransformProcessInfo"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:CreateScheduledSql","cls:SearchLog","cls:DescribeScheduledSqlInfo","cls:DescribeScheduledSqlProcessInfo","cls:DeleteScheduledSql","cls:ModifyScheduledSql","cls:RetryScheduledSqlTask"],"resource": ["*"]},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:SearchLog","cls:DescribeScheduledSqlProcessInfo","cls:CreateScheduledSql","cls:DeleteScheduledSql","cls:ModifyScheduledSql","cls:RetryScheduledSqlTask"],"resource": ["*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies","cls:DescribeScheduledSqlInfo"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:CreateConsumer","cls:ModifyConsumer","cls:DeleteConsumer","cls:DescribeConsumer","cls:DescribeConsumerPreview"],"resource": "*"},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies","cam:AttachRolePolicy","cam:CreateRole","cam:DescribeRoleList","ckafka:DescribeInstances","ckafka:DescribeTopic","ckafka:DescribeInstanceAttributes","ckafka:CreateToken","ckafka:AuthorizeToken"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:CreateConsumer","cls:ModifyConsumer","cls:DeleteConsumer","cls:DescribeConsumer","cls:DescribeConsumerPreview"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["age&13","name&vinson"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies","cam:AttachRolePolicy","cam:CreateRole","cam:DescribeRoleList","ckafka:DescribeInstances","ckafka:DescribeTopic","ckafka:DescribeInstanceAttributes","ckafka:CreateToken","ckafka:AuthorizeToken"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:DescribeConsumer","cls:DescribeConsumerPreview"],"resource": "*"},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies","ckafka:DescribeInstances","ckafka:DescribeTopic","ckafka:DescribeInstanceAttributes","ckafka:CreateToken","ckafka:AuthorizeToken"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:DescribeConsumer","cls:DescribeConsumerPreview"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies","ckafka:DescribeInstances","ckafka:DescribeTopic","ckafka:DescribeInstanceAttributes","ckafka:CreateToken","ckafka:AuthorizeToken"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:DescribeIndex","cls:CreateShipper"],"resource": "*"},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cls:ModifyShipper","cls:DescribeShippers","cls:DeleteShipper","cls:DescribeShipperTasks","cls:RetryShipperTask","cls:DescribeShipperPreview","cos:GetService","cam:ListAttachedRolePolicies","cam:AttachRolePolicy","cam:CreateRole","cam:DescribeRoleList"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:DescribeIndex","cls:CreateShipper"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cls:ModifyShipper","cls:DescribeShippers","cls:DeleteShipper","cls:DescribeShipperTasks","cls:RetryShipperTask","cls:DescribeShipperPreview","cos:GetService","cam:ListAttachedRolePolicies","cam:AttachRolePolicy","cam:CreateRole","cam:DescribeRoleList"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets" ],"resource": "*"},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cls:DescribeShippers","cls:DescribeShipperTasks","cls:RetryShipperTask","cls:DescribeShipperPreview","cam:ListAttachedRolePolicies"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cls:DescribeShippers","cls:DescribeShipperTasks","cls:RetryShipperTask","cls:DescribeShipperPreview","cam:ListAttachedRolePolicies"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:CreateDlcDeliver","cls:ModifyDlcDeliver","cls:DescribeDlcDelivers","cls:DeleteDlcDeliver"],"resource": "*"},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","dlc:DescribeDatabases","dlc:DescribeOptimizedTables","dlc:DescribeDatasourceConnection","cam:ListAttachedRolePolicies","cam:AttachRolePolicy","cam:CreateRole","cam:DescribeRoleList"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:CreateDlcDeliver","cls:ModifyDlcDeliver","cls:DescribeDlcDelivers","cls:DeleteDlcDeliver"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","dlc:DescribeDatabases","dlc:DescribeOptimizedTables","dlc:DescribeDatasourceConnection","cam:ListAttachedRolePolicies","cam:AttachRolePolicy","cam:CreateRole","cam:DescribeRoleList"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets" ],"resource": "*"},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cls:DescribeDlcDelivers","dlc:DescribeDatabases","dlc:DescribeOptimizedTables","dlc:DescribeDatasourceConnection","cam:ListAttachedRolePolicies"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cls:DescribeDlcDelivers","dlc:DescribeDatabases","dlc:DescribeOptimizedTables","dlc:DescribeDatasourceConnection","cam:ListAttachedRolePolicies"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:CheckSplunkConnect","cls:DescribeSplunkPreview","cls:CreateSplunkDeliver","cls:ModifySplunkDeliver","cls:DescribeSplunkDelivers","cls:DeleteSplunkDeliver"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:CheckSplunkConnect","cls:DescribeSplunkPreview","cls:CreateSplunkDeliver","cls:ModifySplunkDeliver","cls:DescribeSplunkDelivers","cls:DeleteSplunkDeliver"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues",],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:CheckSplunkConnect","cls:DescribeSplunkPreview","cls:DescribeSplunkDelivers"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets","cls:CheckSplunkConnect","cls:DescribeSplunkPreview","cls:DescribeSplunkDelivers"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues",],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets"],"resource": "*"},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies","cls:CreateDeliverFunction","cls:DeleteDeliverFunction","cls:ModifyDeliverFunction","cls:GetDeliverFunction","scf:ListFunctions","scf:ListAliases","scf:ListVersionByFunction"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies","cls:CreateDeliverFunction","cls:DeleteDeliverFunction","cls:ModifyDeliverFunction","cls:GetDeliverFunction","scf:ListFunctions","scf:ListAliases","scf:ListVersionByFunction"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets"],"resource": "*"},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies","cls:GetDeliverFunction","scf:ListFunctions","scf:ListAliases","scf:ListVersionByFunction"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeTopics","cls:DescribeLogsets"],"resource": "*","condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies","cls:GetDeliverFunction","scf:ListFunctions","scf:ListAliases","scf:ListVersionByFunction"],"resource": "*"}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:DescribeKafkaConsumer","cls:CloseKafkaConsumer","cls:ModifyKafkaConsumer","cls:OpenKafkaConsumer"],"resource": ["*"]},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:DescribeKafkaConsumer","cls:CloseKafkaConsumer","cls:ModifyKafkaConsumer","cls:OpenKafkaConsumer"],"resource": ["*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}},{"effect": "allow","action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies"],"resource": ["*"]}]}
{"statement": [{"action": ["cls:DescribeLogsets","cls:DescribeTopics","cls:DescribeKafkaConsumer","cls:CloseKafkaConsumer","cls:ModifyKafkaConsumer","cls:OpenKafkaConsumer"],"effect": "allow","resource": ["qcs::cls:ap-chengdu:100001127XXX:logset/axxxxxx-772e-4971-ad9a-ddcfcfff691b","qcs::cls:ap-chengdu:100001127XXX:topic/590xxxxxxx-36c4-447b-a84f-172ee7340b22"]},{"action": ["tag:DescribeResourceTagsByResourceIds","tag:DescribeTagKeys","tag:DescribeTagValues","cam:ListAttachedRolePolicies"],"effect": "allow","resource": ["*"]}],"version": "2.0"}
{"version": "2.0","statement": [{"action": ["cls:OpenKafkaConsumer"],"effect": "allow","resource": ["*"]}]}
{"statement": [{"action": ["cls:DescribeRemoteWriteTask","cls:DescribeTopics","cls:CreateRemoteWriteTask","cls:ModifyRemoteWriteTask","cls:DescribeLogsets","cls:DeleteRemoteWriteTask","cls:CheckRemoteWriteTaskConnect"],"effect": "allow","resource": ["*"]}],"version": "2.0"}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:DescribeRemoteWriteTask","cls:DescribeTopics","cls:CreateRemoteWriteTask","cls:ModifyRemoteWriteTask","cls:DescribeLogsets","cls:DeleteRemoteWriteTask","cls:CheckRemoteWriteTaskConnect"],"resource": ["*"],"condition": {"string_equal": {"qcs:resource_tag": "key:value"}}}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:CreateConsumerGroup","cls:ModifyConsumerGroup","cls:DescribeConsumerGroups","cls:DeleteConsumerGroup","cls:DescribeConsumerOffsets","cls:CommitConsumerOffsets","cls:SendConsumerHeartbeat","cls:pullLog"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:CreateConsole","cls:DeleteConsole","cls:DescribeConsoles","vpc:DescribeSubnetEx","vpc:DescribeVpcEx","cls:ModifyConsole"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:CreateConsole","cls:DeleteConsole","cls:DescribeConsoles","vpc:DescribeSubnetEx","vpc:DescribeVpcEx","cls:ModifyConsole"],"resource": ["qcs::cls::uin/100******123:datasight/clsconsole-1234abcd"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:CreateConsole","cls:DeleteConsole","cls:DescribeConsoles","vpc:DescribeSubnetEx","vpc:DescribeVpcEx","cls:ModifyConsole"],"resource": ["*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}}]}
{"statement": [{"action": ["cls:DescribeConsoles"],"effect": "allow","resource": ["*"]}],"version": "2.0"}
{"statement": [{"action": ["cls:DescribeConsoles"],"effect": "allow","resource": ["qcs::cls::uin/100******123:datasight/clsconsole-1234abcd"]}],"version": "2.0"}
{"statement": [{"action": ["cls:DescribeConsoles"],"effect": "allow","resource": ["*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}}],"version": "2.0"}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:SearchLog","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:MetricsLabelValues","cls:MetricsQueryRange","cls:MetricsLabels","cls:MetricsQuery"],"resource": ["*"]}]}
{"version": "2.0","statement": [{"effect": "allow","action": ["cls:SearchLog","cls:MetricsSeries","cls:MetricsQueryExemplars","cls:MetricsLabelValues","cls:MetricsQueryRange","cls:MetricsLabels","cls:MetricsQuery"],"resource": ["*"],"condition": {"for_any_value:string_equal": {"qcs:resource_tag": ["key&value"]}}}]}
Last updated:2025-10-22 17:08:23
package com.tencentcloudapi.cls;

// The imports below assume the standard package layout of the CLS producer SDK (tencentcloud-cls-sdk-java).
import com.tencentcloudapi.cls.producer.AsyncProducerClient;
import com.tencentcloudapi.cls.producer.AsyncProducerConfig;
import com.tencentcloudapi.cls.producer.common.LogContent;
import com.tencentcloudapi.cls.producer.common.LogItem;
import com.tencentcloudapi.cls.producer.errors.ProducerException;
import com.tencentcloudapi.cls.producer.util.NetworkUtils;

import org.junit.Test;

import java.util.ArrayList;
import java.util.List;

public class AsyncProducerClientTest {

    @Test
    public void testAsyncProducerClient() throws ProducerException, InterruptedException {
        String endpoint = "ap-guangzhou.cls.tencentcs.com";
        // API key secretId, required.
        String secretId = "";
        // API key secretKey, required.
        String secretKey = "";
        // API token, required.
        String secretToken = "";
        // Log topic ID, required.
        String topicId = "";
        final AsyncProducerConfig config = new AsyncProducerConfig(endpoint, secretId, secretKey, NetworkUtils.getLocalMachineIP(), secretToken);
        // Build a client instance.
        final AsyncProducerClient client = new AsyncProducerClient(config);
        for (int i = 0; i < 10000; ++i) {
            List<LogItem> logItems = new ArrayList<>();
            int ts = (int) (System.currentTimeMillis() / 1000);
            LogItem logItem = new LogItem(ts);
            logItem.PushBack(new LogContent("__CONTENT__", "Hello, I am from Shenzhen.|hello world"));
            logItem.PushBack(new LogContent("city", "guangzhou"));
            logItem.PushBack(new LogContent("logNo", Integer.toString(i)));
            logItem.PushBack(new LogContent("__PKG_LOGID__", String.valueOf(System.currentTimeMillis())));
            logItems.add(logItem);
            client.putLogs(topicId, logItems, result -> System.out.println(result.toString()));
        }
        client.close();
    }
}
Last updated:2024-01-20 17:14:28
10.20.20.10;[Tue Jan 22 14:49:45 CST 2019 +0800];GET /online/sample HTTP/1.1;127.0.0.1;200;647;35;http://127.0.0.1/
IP: 10.20.20.10
time: [Tue Jan 22 14:49:45 CST 2019 +0800]
request: GET /online/sample HTTP/1.1
host: 127.0.0.1
status: 200
length: 647
bytes: 35
referer: http://127.0.0.1/
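Purely to illustrate how the semicolon separator splits the raw line above into these fields (a local shell sanity check, not a LogListener configuration):
echo '10.20.20.10;[Tue Jan 22 14:49:45 CST 2019 +0800];GET /online/sample HTTP/1.1;127.0.0.1;200;647;35;http://127.0.0.1/' | awk -F';' '{print "IP: "$1; print "status: "$5; print "referer: "$8}'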
If a log is not structured, CLS uses __CONTENT__ as the key and the original log text as the value by default. The key-value pairs produced in each LogListener parsing mode are as follows:
LogListener Parsing Mode | Key | Description |
Full text in a single line/Full text in multiple lines | __CONTENT__ | __CONTENT__ is the key name by default. |
JSON format | Name in the JSON file | The name and value in the original text of the JSON log are used as a key-value pair. |
Separator format | Custom | After dividing fields using separators, you need to define a key name for each field group. |
Full regular expression | Custom | After extracting fields based on a regular expression, you need to define a key name for each field group. |
Collection Method | Description |
API | You can call CLS APIs to upload structured logs to CLS. For more information, see Uploading Log via API. |
SDK | |
LogListener client | LogListener is a log collection client provided by CLS. You can quickly access CLS by simply configuring LogListener in the console. For more information, see LogListener Use Process. |
Category | Collection via LogListener | Collection via API |
Code modification | Provides a non-intrusive collection method for applications, without code modification. | Reports logs only after modifying application code. |
Resumable upload | Supports resumable upload of logs. | Needs to be implemented in your own code. |
Retransmission upon failure | Provides an inherent retry mechanism. | Needs to be implemented in your own code. |
Local cache | Supports local cache, ensuring data integrity during peak hours. | Needs to be implemented in your own code. |
Resource occupation | Occupies resources such as memory and CPU. | Occupies no additional resources. |
System Environment | Recommended Access Method |
Linux/Unix | |
Windows | |
iOS/Android/Web |
Last updated:2025-12-03 11:22:41

2018-01-01 10:00:01 start LogListener
2018-01-01 10:00:02 echo log_1 >> cls.log
2018-01-01 10:00:03 echo log_2 >> cls.log
2018-01-01 10:00:04 echo log_3 >> cls.log
2018-01-01 10:00:05 echo log_4 >> cls.log
......
Last updated:2024-01-20 17:14:28

Last updated:2025-11-19 20:30:29
LogListener version | Processor Architecture | Operating System Category | Supported Installation Environment |
v2.x.x | x64/ARM | TencentOS Server | TencentOS Server 3.1, TencentOS Server 2.4 |
| | CentOS (64-bit) | CentOS_6.8_64-bit, CentOS_6.9_64-bit, CentOS_7.2_64-bit, CentOS_7.3_64-bit, CentOS_7.4_64-bit, CentOS_7.5_64-bit, CentOS_7.6_64-bit, CentOS_8.0_64-bit |
| | Ubuntu (64-bit) | Ubuntu Server_14.04.1_LTS_64-bit, Ubuntu Server_16.04.1_LTS_64-bit, Ubuntu Server_18.04.1_LTS_64-bit, Ubuntu Server_20.04.1_LTS_64-bit, Ubuntu Server_22.04.1_LTS_64-bit |
| | Debian (64-bit) | Debian_8.2_64-bit, Debian_9.0_64-bit, Debian_12.0_64-bit |
| | openSUSE (64-bit) | openSUSE_42.3_64-bit |
Download the LogListener installation package and decompress it to the installation path (/usr/local/ in this example). Then, go to the LogListener directory /usr/local/loglistener/tools and run the installation command. The package is loglistener-linux-x64 by default. To install a specific version, specify the version number; for example, replace loglistener-linux-x64 with loglistener-linux-x64-2.8.0 to install version 2.8.0.
wget http://mirrors.tencent.com/install/cls/loglistener-linux-x64.tar.gz && tar zxvf loglistener-linux-x64.tar.gz -C /usr/local/ && cd /usr/local/loglistener/tools && ./loglistener.sh install
If your server accesses CLS over the Tencent Cloud private network, you can use the private network mirror instead:
wget http://mirrors.tencentyun.com/install/cls/loglistener-linux-x64.tar.gz && tar zxvf loglistener-linux-x64.tar.gz -C /usr/local/ && cd /usr/local/loglistener/tools && ./loglistener.sh install
Taking the /usr/local/ installation path as an example, go to the /usr/local/loglistener/tools path and run the following command as the root user to initialize LogListener (by default, the private network is used to access the service):
./loglistener.sh init -secretid AKID******************************** -secretkey ******************************** -region ap-xxxxxx
Parameter Name | Required | Description |
secretid | Yes | Part of the Cloud API Key, SecretId is used to identify the API caller. Ensure that the account associated with the Cloud API key has the appropriate LogListener log collection permission. |
secretkey | Yes | Part of the Cloud API Key, SecretKey is used to encrypt signature strings and is the server-side verification key for signature strings. Ensure that the account associated with the Cloud API Key has the appropriate LogListener log collection permission. |
encryption | No | Whether to encrypt and store the Cloud API Key. To encrypt the key, set the parameter to true; if encryption of the key is not required, set it to false. For details, see Key Encryption Storage. |
network | No | It indicates how LogListener accesses the CLS service. Values: intra for private network access (default) and internet for public network access. Private network access: Applicable to Tencent Cloud servers located in the same region as the machine group. Public network access: Applicable to non-Tencent Cloud servers or to servers located in regions that do not match those of the machine group. |
region | If domain is configured, this parameter is not required. Otherwise, it is required. | region indicates the region where the CLS is deployed. Enter the appropriate domain name abbreviation, such as ap-beijing or ap-guangzhou. Note: When the CLS location is inconsistent with your business machine location, configure the network parameter as internet to enable access over public network. |
domain | Yes. (Unless region is configured) | The domain name representing the CLS region. For example, ap-beijing.cls.tencentyun.com or ap-guangzhou.cls.tencentyun.com. Note: When the CLS service area used by your business machine is inconsistent with its region, configure a public network domain name, such as ap-beijing.cls.tencentcs.com. |
ip | No | The IP address of the machine that can be associated with the machine group using the configured IP address. For details, see Machine Group. If not specified, LogListener will automatically obtain the local IP address. |
label | No | Machine ID. Once entered, the machine will be associated with the machine group also having the filled machine identification. For details, see Machine Group. Multiple identifiers separated by commas. Note: If a machine label is configured, the machine can only be associated with the machine group using the machine label instead of the IP address; if not configured, the machine group can only be associated with the machine using the IP address. |

For public network access, set the network parameter to internet explicitly:
./loglistener.sh init -secretid AKID******************************** -secretkey ******************************** -region ap-xxxxxx -network internet
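For reference, a sketch that also sets the optional label parameter from the table above; it assumes the optional parameters take the same single-dash form as those shown here, and label1,label2 are placeholder machine IDs:
./loglistener.sh init -secretid AKID******************************** -secretkey ******************************** -region ap-xxxxxx -network internet -label label1,label2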

Note that region indicates the region of the CLS service you use, instead of the region where your business machine resides.
Start LogListener:
systemctl start loglistenerd
/etc/init.d/loglistenerd start

/etc/init.d/loglistenerd -v
/etc/init.d/loglistenerd -h
systemctl (start|restart|stop) loglistenerd # Start, restart, stop
/etc/init.d/loglistenerd (start|restart|stop) # Start, restart, stop
/etc/init.d/loglistenerd status

/etc/init.d/loglistenerd check

Taking the /usr/local/ installation path as an example, go to the /usr/local/loglistener/tools path and run the uninstallation command as the admin (root) user:
./loglistener.sh uninstall
Back up the breakpoint directory (loglistener/data) of the earlier version; for example, back up the legacy breakpoint file to the /tmp/loglistener-backup directory:
cp -r loglistener-2.2.3/data /tmp/loglistener-backup/
cp -r /tmp/loglistener-backup/data loglistener-<version>/
Replace <version> as required. The following is an example:
cp -r /tmp/loglistener-backup/data loglistener-2.8.2/
Last updated:2025-11-13 16:55:55
LogListener Versions | Operating System Category | Supported Installation Environment |
v2.8.9 or later | Windows Server | Windows Server 2012 R2, Windows Server 2016, Windows Server 2019, and Windows Server 2022 |
.\loglistener_installer.exe install --secret_id AKID******************************** --secret_key whHwQfjdLnzzCE1jIf09xxxxxxxxxxxx --region ap-xxxxxx
By default, LogListener is installed in the C:\Program Files (x86)\Tencent\LogListener directory.
Parameter Name | Required | Description |
secret_id | Yes | Part of the Cloud API Key, which is used to identify the API caller. Ensure that the account associated with the Cloud API key has the appropriate LogListener log collection permission. |
secret_key | Yes | Part of the Cloud API Key, which is used as a key to encrypt the signature string and verify it on the server. Ensure that the account associated with the Cloud API Key has the appropriate LogListener log collection permission. |
network | No | It indicates how LogListener accesses the CLS service. Values: intra for private network access and internet for public network access. Private network access: Applicable to Tencent Cloud servers located in the same region as the machine group. Public network access: Applicable to non-Tencent Cloud servers or to servers located in regions that do not match those of the machine group. |
region | Yes. (Unless domain name is configured.) | Region indicates the region where the CLS is deployed. Enter the appropriate domain name abbreviation, such as ap-beijing or ap-guangzhou. |
endpoint | Yes. (Unless region is configured) | Domain name indicates the domain name for the region where the CLS is deployed, such as ap-beijing.cls.tencentcloud.com and ap-guangzhou.cls.tencentcloud.com. |
ip | No | It indicates the IP address of the machine that can be associated with the machine group using the configured IP address. For details, see Machine Group. If left blank, LogListener will automatically obtain the local IP address. |
label | No | It indicates a machine label. Once entered, the machine will be associated with the corresponding machine group that shares this label. For details, see Machine Group. You can configure multiple labels by separating them using commas. Note: If a machine label is configured, the machine can only be associated with the machine group using the machine label instead of the IP address; if not configured, the machine group can only be associated with the machine using the IP address. |
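For reference, a sketch of an installation command that also sets the optional network and label parameters from the table above. It assumes the optional parameters take the same --flag form as the required ones (this flag spelling is an assumption, not a documented command), and the key, region, and label values are placeholders:
.\loglistener_installer.exe install --secret_id AKID******************************** --secret_key ******************************** --region ap-xxxxxx --network internet --label label1,label2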

Taking the installation path C:\Program Files (x86)\Tencent\LogListener as an example, open Windows PowerShell as an administrator and run the following command in the installation path to check the LogListener version:
.\loglistener_work.exe -v
Taking the installation path C:\Program Files (x86)\Tencent\LogListener as an example, open Windows PowerShell as an administrator and run the following command in the installation path to stop LogListener:
.\loglistener_daemon.exe -action stop
Taking the installation path C:\Program Files (x86)\Tencent\LogListener as an example, open Windows PowerShell as an administrator and run the following command in the installation path to restart LogListener:
.\loglistener_daemon.exe -action restart
Taking the installation path C:\Program Files (x86)\Tencent\LogListener as an example, open Windows PowerShell as an administrator and run the following command in the installation path to check the heartbeat and configuration of LogListener:
.\loglistener_work.exe check

.\loglistener_installer.exe uninstall


Last updated:2024-01-20 17:14:28
accesskey, ID, and region configuration).SecretId information (SecretId and SecretKey), and set Machine label in Advanced Settings as needed.SecretId and SecretKey for uploading logs. They can be obtained as instructed in Viewing Acquisition Method.Last updated:2024-01-20 17:14:28
wget https://mirrors.tencent.com/install/cls/k8s/tencentcloud-cls-k8s-install.sh
chmod +x tencentcloud-cls-k8s-install.sh
./tencentcloud-cls-k8s-install.sh --region ap-guangzhou --secretid xxx --secretkey xxx
Parameter | Description |
secretid | Tencent Cloud account access ID |
secretkey | Tencent Cloud account access key |
region | CLS region |
docker_root | The root directory of the cluster Docker. The default value is `/var/lib/docker`. If the actual directory is different from the default one, specify the root directory of Docker. |
cluster_id | Cluster ID. If it is not specified, a default ID will be generated during installation (we recommend that you specify a cluster ID, as the generated default ID is less readable). |
network | Private network or public network (default). |
api_network | Private network or internet (default) for TencentCloud API. |
api_region | TencentCloud API region. |
./tencentcloud-cls-k8s-install.sh --secretid xxx --secretkey xx --region ap-guangzhou --network internet --api_region ap-guangzhou
tencent-cloud-cls-log Helm package.helm list -n kube-system
kubectl get pods -o wide -n kube-system | grep tke-log-agent
kubectl get pods -o wide -n kube-system | grep cls-provisioner
tke-log-agent collection Pod and a cls-provisioner Pod will start on each host. To adjust the environment variables of tke-log-agent listed in the table below, edit its DaemonSet (a non-interactive alternative is shown after the table):
kubectl edit ds tke-log-agent -n kube-system

Variable | Description |
MAX_CONNECTION | Maximum number of connections, which is `10` by default. |
CHECKPOINT_WINDOW_SIZE | The checkpoint window size of a file, which is `1024` by default. |
MAX_FILE_BREAKPOINTS | Breakpoint file size, which is `N*2k`. `N` defaults to `8k`. |
MAX_SENDRATE | Maximum sending rate (bytes/s), which is not limited by default. |
MAX_FILE | Maximum number of monitored files, which is `15000` by default. |
MAX_DIR | Maximum number of monitored directories, which is `5000` by default. |
MAX_HTTPS_CONNECTION | Maximum number of HTTPS connections, which is `100` by default. |
CONCURRENCY_TASKS | LogListener task pool, which is `256` by default (supported by v3.x or later). |
PROCESS_TASKS_EVERY_LOOP | Number of tasks processed every loop, which is `4` by default. |
CPU_USAGE_THRES | LogListener CPU usage threshold, which is not limited by default. |
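As an illustrative alternative to editing the DaemonSet interactively (this is not part of the original procedure), the same variables can also be set non-interactively with kubectl set env; the value below is a placeholder.
kubectl set env daemonset/tke-log-agent -n kube-system MAX_FILE=20000   # raise the monitored file limit for the containers in the DaemonSet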
wget http://mirrors.tencent.com/install/cls/k8s/upgrade/upgrade.sh
chmod +x upgrade.sh
./upgrade.sh
wget http://mirrors.tencent.com/install/cls/k8s/upgrade/upgrade-1.13.sh
chmod +x upgrade-1.13.sh
./upgrade-1.13.sh
helm uninstall tencent-cloud-cls-log -n kube-system
kubectl delete secret -n kube-system cls-k8s
Last updated:2025-11-13 16:49:51
wget https://mirrors.tencentyun.com/install/cls/script/loglistener/loglistener_operator && chmod u+x loglistener_operator
wget https://mirrors.tencent.com/install/cls/script/loglistener/loglistener_operator && chmod u+x loglistener_operator
./loglistener_operator install -s ${secret_id} -k ${secret_key} -r ${region}
./loglistener_operator install -s ${secret_id} -k ${secret_key} -r ${region} --version ${version}
./loglistener_operator install -s ${secret_id} -k ${secret_key} -r ${region} --package_path ${package_path}
./loglistener_operator install -s ${secret_id} -k ${secret_key} -r ${region} --url https://xxx.tar.gz

Parameter Name | Required or Not | Description |
-s | Yes | Part of the Cloud API Key, which is used to identify the API caller. Ensure that the account associated with the Cloud API key has the appropriate LogListener log collection permission. |
-k | Yes | Part of the Cloud API Key, which is used as a key to encrypt the signature string and verify it on the server. Ensure that the account associated with the Cloud API Key has the appropriate LogListener log collection permission. |
-n | No | Indicates the method LogListener uses to access the service domain. Valid values: internal (private network access, default), internet (public network access). Private network access: Applicable to Tencent Cloud servers located in the same region as the machine group. Public network access: Applicable to non-Tencent Cloud servers or to servers located in regions that do not match those of the machine group. |
-r | Yes | region indicates the region where the CLS is deployed. Enter the appropriate region abbreviation, such as ap-beijing or ap-guangzhou. Note: When the CLS region is inconsistent with your business machine's region, configure the parameter network as internet to represent public network access. |
-d | No | The domain name representing the CLS region, for example, ap-beijing.cls.tencentyun.com or ap-guangzhou.cls.tencentyun.com. Note: When the CLS region is inconsistent with your business machine's region, configure the public network domain name, for example, ap-beijing.cls.tencentcs.com. |
-i | No | The IP address of the machine. The machine group can be associated with the machine using the configured IP address. For details, see Machine Group. If not specified, LogListener will automatically obtain the local IP address. |
-l | No | Machine ID (label). Once entered, the machine will be associated with the machine group configured with the same machine ID. For details, see Machine Group. Multiple IDs can be separated with commas. Note: If a machine label is configured, the machine can only be associated with the machine group using the machine label instead of the IP address; if not configured, the machine group can only be associated with the machine using the IP address. |
-p | No | Port, default 80. |
-u | No | Do not upload machine identification to CLS by default. |
--base_dir | No | LogListener installation path. LogListener is installed under the /opt directory by default. |
--package_path | No | Specifies the local package path when installing from a local package. |
--url | No | Specifies the download URL used during installation; a mirror domain name or IP address can be specified. |
--version | No | Installs the specified version; the latest version is installed by default. |
./loglistener_operator install --help.
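For instance, a hypothetical public-network installation that also sets machine labels could combine the flags from the table above as follows (credentials and labels are placeholders):
./loglistener_operator install -s AKIDxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -k xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -r ap-guangzhou -n internet -l web-server,canary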
systemctl start loglistener
Run systemctl status loglistener to check if the startup is successful.
/opt/loglistener), run the following command to view the version:
./loglistener -v
/opt/loglistener), run the following command to view help:
./loglistener -h
systemctl stop loglistener
systemctl restart loglistener
Run the following command to check whether the restart is successful:
systemctl status loglistener

systemctl status loglistener
Run systemctl stop loglistener to stop running the previous version of LogListener:
systemctl stop loglistener
/opt/loglistener), run the uninstallation command with administrator privileges in the path /opt/loglistener/tools:./loglistener_operator uninstall
Run systemctl stop loglistener to stop running the previous version of LogListener. Taking /opt/loglistener as an example, go to the installation directory and back up the checkpoint file directory ./data of the old version; for example, back up the old checkpoint files to /tmp/loglistener-backup:
cp -r ./data /tmp/loglistener-backup/
Run ./loglistener_operator uninstall to uninstall the old version of LogListener. After installing the new version under /opt/loglistener, copy the backed-up checkpoint file directory (from step 2) into the new LogListener directory:
cp -r /tmp/loglistener-backup/data ./
Run systemctl start loglistener to start running the new version of LogListener. In brief: run systemctl stop loglistener to stop running the previous version of LogListener, run ./loglistener_operator uninstall to uninstall the old version, and run systemctl start loglistener to start running the new version.Last updated:2025-11-19 19:49:42

python -V
wget http://mirrors.tencentyun.com/install/cls/agent-update.py
/usr/bin/python2.7 agent-update.py http://mirrors.tencentyun.com/install/cls/loglistener-linux-x64-x.tar.gz
x in loglistener-linux-x64-x.tar.gz represents the version number of LogListener to upgrade to (such as 2.7.2). The latest version of LogListener is as displayed in LogListener Installation Guide. If the entered version does not exist, the download will fail. If the version entered is earlier than the current version installed on the machine, the upgrade will not take effect.Last updated:2024-01-20 17:14:28
Default Configuration Item | Description |
Log Topic | When you enable LogListener service logs, the logset cls_service_logging will be created for you automatically, and all log data generated by associated machine groups will be categorized and stored in corresponding log topics. The following three log topics are created by default: loglistener_status: heartbeat status logs of the corresponding LogListener. loglistener_alarm: LogListener monitoring logs grouped by collection metric/error type. loglistener_business: logs of LogListener's collection operations, with each log corresponding to one request. |
Region | After LogListener service logs are enabled, logsets and log topics will be created in the same region as the LogListener machine groups. |
Log Storage Duration | The default storage duration is 7 days, and the value cannot be modified. |
Index | Full-text index and key-value index are enabled for all collected log data by default. You can modify the index configuration. For details, please see Configuring Index. |
Dashboard | Dashboard service_log_dashboard will be created in the same region of LogListener by default. |
cls_service_logging is a unified logset for LogListener service logs.service_log_dashboard dashboard.app1 application logs located in /var/log/app1/. You can get the statistics of logs collected under this path.

cls_service_logging will not be deleted automatically. You can manually delete the logset where the service logs are saved.service_log_dashboard by the type of recorded logs to display LogListener’s collection and monitoring statistics.loglistener_status are detailed as follows:Parameter | Description |
InstanceId | LogListener unique identifier |
IP | Machine group IP |
Label | An array of machine IDs |
Version | Version number |
MemoryUsed | Memory utilization of LogListener |
MemMax | Memory utilization threshold on this machine set by the Agent |
CpuUsage | LogListener CPU utilization |
Status | LogListener running status |
TotalSendLogSize | Size of logs sent |
SendSuccessLogSize | Size of successfully sent logs |
SendFailureLogSize | Size of sending-failed logs |
SendTimeoutLogSize | Size of logs with sending timed out |
TotalParseLogCount | Total number of logs parsed |
ParseFailureLogCount | Number of parsing-failed logs |
TotalSendLogCount | Number of logs sent |
SendSuccessLogCount | Number of successfully sent logs |
SendFailureLogCount | Number of sending-failed logs |
SendTimeoutLogCount | Number of logs with sending timed out |
TotalSendReqs | Total number of requests sent |
SendSuccessReqs | Number of successfully sent requests |
SendFailureReqs | Number of sending-failed requests |
SendTimeoutReqs | Number of requests with sending timed out |
TotalFinishRsps | Total number of responses received |
TotalSuccessFromStart | Total number of successfully sent requests since LogListener was enabled |
AvgReqSize | Average request packet size |
SendAvgCost | Average sending time |
AvailConnNum | Number of available connections |
QueueSize | The size of queued requests |
loglistener_alarm are detailed as follows:Monitoring Metric | Description |
InstanceId | LogListener unique identifier |
Label | An array of machine IDs |
IP | Machine group IP |
Version | LogListener version |
AlarmType.count | Statistics of alarm types |
AlarmType.example | Sample alarm type |
Alarm Type | Type ID | Description |
UnknownError | 0 | Initializing the alarm type. |
UnknownError | 1 | Failed to parse. |
CredInvalid | 2 | Failed to verify. |
SendFailure | 3 | Failed to send. |
RunException | 4 | Abnormal LogListener running. |
MemLimited | 5 | Reached the memory utilization threshold. |
FileProcException | 6 | Exceptions occurred in file processing. |
FilePosGetError | 7 | Failed to get the file position info. |
HostIpException | 8 | Exceptions occurred in the server IP thread. |
StatException | 9 | Failed to get the process info. |
UpdateException | 10 | Exceptions occurred in the CLS modification feature. |
DoSendError | 11 | Failed to confirm sending. |
FileAddError | 12 | Failed to create the file. |
FileMetaError | 13 | Failed to create the metadata file. |
FileOpenError | 14 | Failed to open the file. |
FileOpenError | 15 | Failed to read the file. |
FileStatError | 16 | Failed to get the file status. |
getTimeError | 17 | Failed to get the time from the log content. |
HandleEventError | 18 | Exceptions occurred in processing the file. |
handleFileCreateError | 19 | Exceptions occurred in handleFileCreateEvent(). |
LineParseError | 20 | Failed to parse the log line. |
Lz4CompressError | 21 | Failed to compress. |
readEventException | 22 | Failed to read. |
ReadFileBugOn | 23 | A bug exists. |
ReadFileException | 24 | Exceptions occurred in the read file. |
ReadFileInodeChange | 25 | File node changed. |
ReadFileTruncate | 26 | The read file is truncated. |
WildCardPathException | 27 | Exceptions occurred in addWildcardPathInotify(). |
loglistener_business are detailed as follows:Parameter | Description |
InstanceId | LogListener unique identifier |
Label | An array of machine IDs |
IP | Machine group IP |
Version | LogListener version |
TopicId | The target topic of the collected file |
FileName | File path name |
FileName | Actual file path |
FileInode | File node |
FileSize | File size |
LastReadTime | The most recent read time of the file |
ParseFailLines | Number of parsing-failed logs within a time window |
ParseFailSize | Size of parsing-failed logs within a time window |
ParseSuccessLines | Number of logs successfully parsed within a time window |
ParseSuccessSize | Size of logs successfully parsed within a time window |
ReadOffset | Offset of file reading in bytes |
TruncateSize | Size of truncated log files within a time window |
ReadAvgDelay | Average time delay for reads within a time window |
TimeFormatFailuresLines | Number of timestamp matching errors within a time window |
SendSuccessSize | Size of logs successfully sent within a time window |
SendSuccessCount | Number of logs successfully sent within a time window |
SendFailureSize | Size of sending-failed logs within a time window |
SendFailureCount | Number of sending-failed logs within a time window |
SendTimeoutSize | Size of logs with sending timed out within a time window |
SendTimeoutCount | Number of logs with sending timed out within a time window |
DroppedLogSize | Size of dropped logs within a time window |
DroppedLogCount | Number of dropped logs in a time window |
ProcessBlock | Whether the current file has triggered collection blocking in a statistical period (collection blocking will be triggered if the sliding window of a file has not moved for 10 minutes) |
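For example, assuming these fields are configured as key-value indexes of a numeric type, a search statement along the following lines could be used in the loglistener_business topic to surface files whose logs failed to parse in the selected time range (the exact syntax depends on your index configuration):
ParseFailLines:>0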
Last updated:2025-07-29 11:46:30


Configuration Item | Description |
Alarm Metric | You can select key metrics of the corresponding cloud product as alarm metrics. |
Statistical Granularity | Time interval for collecting and analyzing monitoring data. |
Threshold | Metric-based alarms support two types of thresholds: static and dynamic. Static thresholds include fixed static thresholds and period-over-period static thresholds. You can select the comparison relationship and threshold value based on your business needs. When you configure metric-based alarms, the static threshold is selected by default. Dynamic thresholds are suitable for scenarios where the business system exhibits clear periodic fluctuations or sudden spikes and drops in data. |
Alarm Level | When the alarm level feature is enabled, you can configure alarms at three levels: Serious, Warn, and Note. This feature is currently supported only for Cloud Product Monitoring and Application Performance Management (APM). |
Continuous Monitoring Data Points | Specify the number of continuous monitoring data points that should meet the condition before an alarm is triggered. |
Alarm Frequency | When an alarm is triggered, you can define how frequently notifications are sent. Notification frequency options include specify frequency for repeated notifications and exponentially increasing notifications by cycle. Specify frequency for repeated notifications: If the alarm is not cleared within 24 hours, the system will send notifications at the specified frequency, such as every 1 hour or every 2 hours. If the alarm remains uncleared after 24 hours, notifications will be sent once per day. (Once the alarm is cleared, the notification cycle will reset.) Note: If the notification frequency is configured as "only alarm once", a notification will be sent only when the alarm is first triggered and again when it is cleared during its lifecycle. Exponentially increasing notifications by cycle: Based on a fixed 5-minute base interval, alarm notifications are sent at exponentially increasing time intervals (first interval, second interval, third interval, and so on). The interval between notifications becomes progressively longer, helping to reduce repeated alarms and minimize unnecessary disturbances. |
Triggering Conditions | When multiple alarm trigger conditions are configured, they can be evaluated based on any, all, or composite logic. The triggering conditions are as follows. Any: The alarm is triggered when any one of the configured conditions reaches its threshold. All: The alarm is triggered only when all configured conditions reach their thresholds. Composite: The alarm is triggered when composite alarm conditions are met. Composite rules support logical expressions using AND and OR operators. |



Last updated:2024-01-20 17:14:28
Last updated:2024-01-20 17:14:28
Version | Change Type | Description |
v2.7.9 | Experience optimization | Added LogListener file lock verification, so only one agent instance can be started by default. Fixed the empty row processing exception in `containerd stdout`. Fixed full disk and business exceptions caused by file handle leaks. Fixed the failure in parsing the second half of the log content when there were too many lines of logs. |
v2.7.8 | Experience optimization | Fixed the issue where logs didn't have tag metadata due to metadata file generation delay in container scenarios. |
v2.7.7 | Experience optimization | Fixed the issue where the collection program's network connection couldn't be reconnected after a DNS exception was fixed. |
v2.7.6 | Experience optimization | Optimized the line break processing during `hostname` extraction. |
v2.7.5 | Experience optimization | Fixed the processing exception in file rotation when the actual file and soft link in the same directory were collected at the same time with different collection configurations. |
v2.7.4 | New feature | Supported collecting `hostname` as the metadata. Added `meta_processor` for combined parsing and supported parsing custom metadata (path). |
| Experience optimization | Fixed the missing collection problem in file deletion scenarios. Fixed the issue where a file was collected repeatedly as the file size calculated by the system was incorrect due to the lack of a line break at the end of the file. |
v2.7.3 | New feature | Supported log upload from multiple endpoints by a single agent instance. |
v2.7.2 | Experience optimization | Fixed the issue where the memory leaked as the corresponding configuration cache couldn't be cleared when a rotation file was removed. |
v2.7.1 | Experience optimization | Fixed the issue where a large number of empty service logs were printed. |
v2.7.0 | Experience optimization | Fixed the issue where collection was blocked due to possible exceptions when an empty string was uploaded. |
v2.6.9 | Experience optimization | Fixed the issue where excessive invalid logs were printed when multi-line log parsing failed. |
v2.6.8 | Experience optimization | Added a limit on the LogListener collection specification, so the protection mechanism will be enabled after the limit is exceeded. Fixed the Ubuntu startup failure. Optimized the blocklist feature to reduce the memory usage. Optimized the combined parsing mode and fixed processing exceptions when the root processor was a regular expression parsing plugin. Optimized the printing of certain logs. |
v2.6.7 | New feature | Supported the multi-tenancy collection capabilities under a single agent. |
v2.6.6 | Experience optimization | Fixed the issue where files with a small amount of written data might be missing or delayed during collection in soft link scenarios. |
v2.6.5 | New feature | Supported parsing the time zone information in the log time. |
| Experience optimization | Fixed the empty pointer processing exception in advanced data processing. Fixed the exception when multiple files were rotated at the same time. |
v2.6.4 | New feature | Supported customizing log parsing rules through a plugin. |
| Experience optimization | Optimized the log parsing format pipeline. Fixed the exception of parsing the millisecond timestamp (`%F`). |
v2.6.3 | Experience optimization | Fixed the issue where LogListener couldn't be started if the checkpoint file is corrupted. Fixed the issue where the blocklist didn't take effect for new files in special scenarios. |
v2.6.2 | New feature | Added support for incremental collection. |
| Experience optimization | Optimized the issue where collection is ignored in the period from file scanning to processing. Optimized abnormal overriding during automatic upgrade. |
v2.6.1 | Experience optimization | Optimized the issue where backtracking collection may occur during log rotation in some scenarios. Adjusted the timeout duration for log upload on the collection end to avoid data duplication caused by timeout. |
v2.6.0 | New feature | Added support for CVM batch deployment. Added support for ciphertext storage of secret IDs/KEYs. |
| Experience optimization | Optimized the LogListener installation and stop logic. Optimized the retry policy upon upload failures. Added a tool for detecting and rectifying dead locks caused by Glibc libraries of earlier versions. Optimized collection performance. |
v2.5.9 | Experience optimization | Optimized the resource limit policy. |
v2.5.8 | Experience optimization | Fixed the issue that removing a directory soft link affects the collection of other directory soft links that point to the same target. Fixed the issue that files in a directory cannot be collected if a soft link of the directory is removed and the same soft link is created again. |
v2.5.7 | Experience optimization | Fixed the (new) issue that logs will be collected again when the log file size is greater than 2 GB. Fixed the issue where renaming too many files will cause the program to malfunction. Fixed the issue where specified fields cannot be updated under log collection monitoring. |
v2.5.6 | Experience optimization | Optimized the issue that under specific use cases, the collection program cannot be triggered. |
v2.5.5 | Experience optimization | Optimized metadata checkpoints for collection to guarantee that no data is lost due to restart. Added support for resource limit configuration and overrun handling for memory, CPU, and bandwidth. |
v2.5.4 | New feature | Added support for log collection monitoring. |
| Experience optimization | Enhanced memory overrun handling: LogListener will be automatically loaded when memory overrun lasts for a period of time. |
v2.5.3 | Experience optimization | Optimized LogListener exceptions caused by memory issues. |
v2.5.2 | New feature | Added support for uploading parsing-failed logs. |
| Experience optimization | Optimized the blocklist feature. Now, the blocklist FILE mode supports wildcard filtering. |
v2.5.1 | Experience optimization | Enhanced the handling when breakpoint metadata could not be found in the collection file. |
v2.5.0 | New feature | Added support for automatic LogListener upgrade. Added support for automatic LogListener start in Ubuntu operating system. |
v2.4.6 | Experience optimization | Cleared residual configuration data in the cache after the collection configuration was changed. Optimized the issue where file collection with a soft link pointing to the `realpath` file was affected when an `IN_DELETE` event that deleted the soft link was being processed. Optimized the feature of collecting the same source file via the file's soft link and the directory's soft link at the same time. |
v2.4.5 | New feature | Added support for `multiline_fullregex_log` log collection. |
v2.4.4 | Experience optimization | Optimized the issue of inaccurate log time caused by the msec feature. |
v2.4.3 | New feature | Added support for automatically checking the log format (logFormat). |
v2.4.2 | Experience optimization | Optimized the issue of cache eviction during configuration pulling in Tencent Cloud container scenarios. |
v2.4.1 | New feature | Added support for collecting logs in milliseconds. |
| Experience optimization | Optimized exceptions due to no line break data in user logs. |
v2.4.0 | New feature | Added support for instance-level process monitoring by LogListener. |
v2.3.9 | New feature | Added support for blocklisting collection paths. |
| Experience optimization | Optimized the memory leak issue due to outdated Boost library. |
v2.3.8 | New feature | Added support for multi-path log collection. |
v2.3.6 | Experience optimization | Fixed the issue where collection stopped due to invalid key value. Fixed the memory leak issue due to request failures with the error code 502 returned. |
v2.3.5 | New feature | Added support for log context search. |
| Experience optimization | Fixed the issue where log collection stopped when logs were uploaded but authentication failed in the static configuration mode. Fixed the issue where dynamic configurations were no longer read after the memory exceeded the threshold in the dynamic configuration mode. Fixed the issue where sometimes log collection repeated when the log production speed was too high during log rotation. Fixed the memory leak issue caused by multiple failures to upload logs. |
v2.3.1 | Experience optimization | Optimized memory limit. When the memory limit was reached, requests lasting over 3s were considered as timed out. |
v2.2.6 | New feature | Added support for configuring private domain names and public domain names separately. |
| Experience optimization | Fixed LogListener exceptions caused by `getip`. |
v2.2.5 | New feature | Added support for Tencent Cloud COC environment deployment. |
| Experience optimization | Fixed the core issue caused by `getip`. |
v2.2.4 | Experience optimization | Changed the commands for installation and initialization to the subcommands `install` and `init` of `tools/loglistener.sh` respectively. Changed the command for restart to `/etc/init.d/loglistenerd start|stop|restart`. |
v2.2.3 | Experience optimization | Renaming or creating logs during log rotation will not cause log loss. |
v2.2.2 | Experience optimization | A log greater than 512 KB will be automatically truncated. |
Earlier versions | - | v2.2.2 added support for collection by full regular expression. v2.1.4 added support for full text in multi lines. v2.1.1 added support for log structuring. |
Last updated:2025-04-18 16:19:02
-encryption parameter. To enable key encryption, set the input parameter to true; if encryption is not required, set the input parameter to false. Run ./bin/encrypt_tool -e {Key ID} to obtain the encrypted key ID, and run ./bin/encrypt_tool -e {key} to obtain the encrypted key. Then run vim ./etc/loglistener.conf under the LogListener installation path to open the configuration file, update the secret_id and secret_key fields with the encrypted key ID and key obtained in steps 2 and 3, and finally set encryption to true.
systemctl restart loglistenerd
/etc/init.d/loglistenerd restart
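To double-check the edited fields (illustrative only; the exact layout of loglistener.conf may differ across versions), you can list them from the installation path:
grep -E "secret_id|secret_key|encryption" ./etc/loglistener.conf   # encryption should now be true and both keys should show the encrypted values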
Run vim ./etc/loglistener.conf to replace secret_id and secret_key in the conf file with the plaintext key ID and key, and set encryption to false. Then restart LogListener:
systemctl restart loglistenerd
/etc/init.d/loglistenerd restart
Last updated:2024-01-20 17:14:28
\n to mark the end of a log. For easier structural management, a default key value __CONTENT__ is given to each log, but the log data itself will no longer be structured, nor will the log field be extracted. The time attribute of a log is determined by the collection time. For example, a raw log:
Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
__CONTENT__:Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
test_full as Log Topic Name and click Confirm.[directory prefix expression]/**/[filename expression].Parameter | Description |
Directory Prefix | Directory prefix for log files, which supports only the wildcard characters \* and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
/**/ | Current directory and all its subdirectories. |
File Name | Log file name, which supports only the wildcard characters \* and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
No. | Directory Prefix Expression | Filename Expression | Description |
1. | /var/log/nginx | access.log | In this example, the log path is configured as /var/log/nginx/**/access.log. LogListener will listen for log files named access.log in all subdirectories in the /var/log/nginx prefix path. |
2. | /var/log/nginx | *.log | In this example, the log path is configured as /var/log/nginx/**/*.log. LogListener will listen for log files suffixed with .log in all subdirectories in the /var/log/nginx prefix path. |
3. | /var/log/nginx | error* | In this example, the log path is configured as /var/log/nginx/**/error*. LogListener will listen for log files prefixed with error in all subdirectories in the /var/log/nginx prefix path. |
key:"{"substream":XXX}".log/*.log and rename the old file after log rotation as log/*.log.xxxx.__CONTENT__ as the key name of a log. Assume that a sample log is Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64, and you want to collect all logs on Jan 22, then enter __CONTENT__ in Key and Tue Jan 22.* in Filter Rule.@&()='",;:<>[]{}/ \n\t\r and can be modified as needed.

Last updated:2024-01-20 17:14:28
\n cannot be used to mark the end of a log. To help CLS distinguish between logs, a first-line regular expression is used for matching. When a line of a log matches the preset regular expression, it is considered as the beginning of the log, and the log ends before the next matching line. A default key value __CONTENT__ is also set, but the log data itself is not structured, and no log fields are extracted. The time attribute of a log is determined by the collection time. For example, a raw multi-line log:
10.20.20.10 - - [Tue Jan 22 14:24:03 CST 2019 +0800] GET /online/sample HTTP/1.1 127.0.0.1 200 628 35 http://127.0.0.1/group/1
Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0 0.310 0.310
__CONTENT__:10.20.20.10 - - [Tue Jan 22 14:24:03 CST 2019 +0800] GET /online/sample HTTP/1.1 127.0.0.1 200 628 35 http://127.0.0.1/group/1 \nMozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0 0.310 0.310
test-mtext as Log Topic Name and click Confirm.[directory prefix expression]/**/[filename expression].Parameter | Description |
Directory Prefix | Directory prefix for log files, which supports only the wildcard characters \* and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
/**/ | Current directory and all its subdirectories. |
File Name | Log file name, which supports only the wildcard characters \* and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
No. | Directory Prefix Expression | Filename Expression | Description |
1. | /var/log/nginx | access.log | In this example, the log path is configured as /var/log/nginx/**/access.log. LogListener will listen for log files named access.log in all subdirectories in the /var/log/nginx prefix path. |
2. | /var/log/nginx | *.log | In this example, the log path is configured as /var/log/nginx/**/*.log. LogListener will listen for log files suffixed with .log in all subdirectories in the /var/log/nginx prefix path. |
3. | /var/log/nginx | error* | In this example, the log path is configured as /var/log/nginx/**/error*. LogListener will listen for log files prefixed with error in all subdirectories in the /var/log/nginx prefix path. |
key:"{"substream":XXX}".log/*.log and rename the old file after log rotation as log/*.log.xxxx.__CONTENT__ is used as the key name of a log by default. For example, below is a sample log with full text in multi lines:10.20.20.10 - - [Tue Jan 22 14:24:03 CST 2019 +0800] GET /online/sample HTTP/1.1 127.0.0.1 200 628 35 http://127.0.0.1/group/1Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0 0.310 0.310
10.20.20.10, enter __CONTENT__ in Key and 10.20.20.10.* in Filter Rule.Key value for parsing failures (which is LogParseFailure by default). All parsing-failed logs are uploaded with the input content as the key name (Key) and the raw log content as the key value (Value).@&()='",;:<>[]{}/ \n\t\r and can be modified as needed.

Last updated:2024-01-20 17:14:28
10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354
(\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
() capture groups. You can specify the key name of each group. The sample log is extracted as:
body_bytes_sent: 9703
http_host: 127.0.0.1
http_protocol: HTTP/1.1
http_referer: http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum
http_user_agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
remote_addr: 10.135.46.111
request_length: 782
request_method: GET
request_time: 0.354
request_url: /my/course/1
status: 200
time_local: [22/Jan/2019:19:19:30 +0800]
upstream_response_time: 0.354
test-whole as Log Topic Name and click Confirm.[directory prefix expression]/**/[filename expression].Parameter | Description |
Directory Prefix | Directory prefix for log files, which supports only the wildcard characters \* and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
/**/ | Current directory and all its subdirectories. |
File Name | Log file name, which supports only the wildcard characters \* and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
No. | Directory Prefix Expression | Filename Expression | Description |
1. | /var/log/nginx | access.log | In this example, the log path is configured as /var/log/nginx/**/access.log. LogListener will listen for log files named access.log in all subdirectories in the /var/log/nginx prefix path. |
2. | /var/log/nginx | *.log | In this example, the log path is configured as /var/log/nginx/**/*.log. LogListener will listen for log files suffixed with .log in all subdirectories in the /var/log/nginx prefix path. |
3. | /var/log/nginx | error* | In this example, the log path is configured as /var/log/nginx/**/error*. LogListener will listen for log files prefixed with error in all subdirectories in the /var/log/nginx prefix path. |
key:"{"substream":XXX}".log/*.log and rename the old file after log rotation as log/*.log.xxxx.



10/Dec/2017:08:00:00.000 is %d/%b/%Y:%H:%M:%S.%f.2017-12-10 08:00:00.000 is %Y-%m-%d %H:%M:%S.%f.12/10/2017, 08:00:00.000 is %m/%d/%Y, %H:%M:%S.%f.status field with the value 400 or 500 after the sample log is parsed in full regular expression mode, you need to configure key as status and the filter rule as 400|500.Key value for parsing failures (which is LogParseFailure by default). All parsing-failed logs are uploaded with the input content as the key name (Key) and the raw log content as the key value (Value).@&()='",;:<>[]{}/ \n\t\r and can be modified as needed.

Last updated:2024-01-20 17:14:28
[2018-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
\[\d+-\d+-\w+:\d+:\d+,\d+]\s\[\w+]\s.*
\[(\d+-\d+-\w+:\d+:\d+,\d+)\]\s\[(\w+)\]\s(.*)
() capture groups. You can specify the key name of each group. The sample log is extracted as:
time: 2018-10-01T10:30:01,000
level: INFO
msg: java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
test-multi as Log Topic Name and click Confirm.[directory prefix expression]/**/[filename expression].Parameter | Description |
Directory Prefix | Directory prefix for log files, which supports only the wildcard characters * and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
/**/ | Current directory and all its subdirectories. |
File Name | Log file name, which supports only the wildcard characters * and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
No. | Directory Prefix Expression | Filename Expression | Description |
1. | /var/log/nginx | access.log | In this example, the log path is configured as /var/log/nginx/**/access.log. LogListener will listen for log files named access.log in all subdirectories in the /var/log/nginx prefix path. |
2. | /var/log/nginx | *.log | In this example, the log path is configured as /var/log/nginx/**/*.log. LogListener will listen for log files suffixed with .log in all subdirectories in the /var/log/nginx prefix path. |
3. | /var/log/nginx | error* | In this example, the log path is configured as /var/log/nginx/**/error*. LogListener will listen for log files prefixed with error in all subdirectories in the /var/log/nginx prefix path. |
key:"{"substream":XXX}".log/*.log and rename the old file after log rotation as log/*.log.xxxx.


10/Dec/2017:08:00:00 is %d/%b/%Y:%H:%M:%S.
Example 2: The parsing format of the original timestamp `2017-12-10 08:00:00` is %Y-%m-%d %H:%M:%S.
Example 3: The parsing format of the original timestamp 12/10/2017, 08:00:00 is %m/%d/%Y, %H:%M:%S.status field with the value 400 or 500 after the sample log is parsed in full regular expression mode, you need to configure key as status and the filter rule as 400|500.Key value for parsing failures (which is LogParseFailure by default). All parsing-failed logs are uploaded with the input content as the key name (Key) and the raw log content as the key value (Value).@&()='",;:<>[]{}/ \n\t\r and can be modified as needed.

Last updated:2024-01-20 17:14:28
\n.{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}
agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
body_sent: 23
http_host: 127.0.0.1
method: POST
referer: http://127.0.0.1/my/course/4
remote_ip: 10.135.46.111
request: POST /event/dispatch HTTP/1.1
response_code: 200
responsetime: 0.232
time_local: 22/Jan/2019:19:19:34 +0800
upstreamhost: unix:/tmp/php-cgi.sock
upstreamtime: 0.232
url: /event/dispatch
xff: -
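As a quick local illustration of first-layer key extraction (assuming the jq tool is available on the machine; it is not part of CLS), you can preview the keys that would become field names for the sample log:
echo '{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}' | jq 'keys'   # prints the first-layer keys of the sample JSON log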
test-json as Log Topic Name and click Confirm.[directory prefix expression]/**/[filename expression].Parameter | Description |
Directory Prefix | Directory prefix for log files, which supports only the wildcard characters * and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
/**/ | Current directory and all its subdirectories. |
File Name | Log file name, which supports only the wildcard characters * and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
No. | Directory Prefix Expression | Filename Expression | Description |
1. | /var/log/nginx | access.log | In this example, the log path is configured as /var/log/nginx/**/access.log. LogListener will listen for log files named access.log in all subdirectories in the /var/log/nginx prefix path. |
2. | /var/log/nginx | *.log | In this example, the log path is configured as /var/log/nginx/**/*.log. LogListener will listen for log files suffixed with .log in all subdirectories in the /var/log/nginx prefix path. |
3. | /var/log/nginx | error* | In this example, the log path is configured as /var/log/nginx/**/error*. LogListener will listen for log files prefixed with error in all subdirectories in the /var/log/nginx prefix path. |
key:"{"substream":XXX}".log/*.log and rename the old file after log rotation as log/*.log.xxxx.10/Dec/2017:08:00:00 is %d/%b/%Y:%H:%M:%S.
Example 2: The parsing format of the original timestamp `2017-12-10 08:00:00` is %Y-%m-%d %H:%M:%S.
Example 3: The parsing format of the original timestamp 12/10/2017, 08:00:00 is %m/%d/%Y, %H:%M:%S.response_code field with the value 400 or 500 from the original JSON log file, you need to configure key as response_code and the filter rule as 400|500.Key value for parsing failures (which is LogParseFailure by default). All parsing-failed logs are uploaded with the input content as the key name (Key) and the raw log content as the key value (Value).

Last updated:2024-01-20 17:14:28
\n. When CLS processes separator logs, you need to define a unique key for each separate field.10.20.20.10 - ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: 35 ::: http://127.0.0.1/
:::, the log will be segmented into eight fields, and a unique key will be defined for each of them:
IP: 10.20.20.10 -
bytes: 35
host: 127.0.0.1
length: 647
referer: http://127.0.0.1/
request: GET /online/sample HTTP/1.1
status: 200
time: [Tue Jan 22 14:49:45 CST 2019 +0800]
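As a quick local sanity check (illustrative only, not part of the CLS configuration), you can confirm that the ::: separator splits the sample log into eight fields:
echo '10.20.20.10 - ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: 35 ::: http://127.0.0.1/' | awk -F' ::: ' '{print NF}'   # prints 8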
test-separator as Log Topic Name and click Confirm.[directory prefix expression]/**/[filename expression].Parameter | Description |
Directory Prefix | Directory prefix for log files, which supports only the wildcard characters * and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
/**/ | Current directory and all its subdirectories. |
File Name | Log file name, which supports only the wildcard characters * and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
No. | Directory Prefix Expression | Filename Expression | Description |
1. | /var/log/nginx | access.log | In this example, the log path is configured as /var/log/nginx/**/access.log. LogListener will listen for log files named access.log in all subdirectories in the /var/log/nginx prefix path. |
2. | /var/log/nginx | *.log | In this example, the log path is configured as /var/log/nginx/**/*.log. LogListener will listen for log files suffixed with .log in all subdirectories in the /var/log/nginx prefix path. |
3. | /var/log/nginx | error* | In this example, the log path is configured as /var/log/nginx/**/error*. LogListener will listen for log files prefixed with error in all subdirectories in the /var/log/nginx prefix path. |
key:"{"substream":XXX}".log/*.log and rename the old file after log rotation as log/*.log.xxxx.:::, it can also be parsed through custom delimiter.10/Dec/2017:08:00:00 is %d/%b/%Y:%H:%M:%S.
Example 2: The parsing format of the original timestamp `2017-12-10 08:00:00` is %Y-%m-%d %H:%M:%S.
Example 3: The parsing format of the original timestamp 12/10/2017, 08:00:00 is %m/%d/%Y, %H:%M:%S.status field with the value 400 or 500 after the sample log is parsed in separator mode, you need to configure key as status and the filter rule as 400|500.Key value for parsing failures (which is LogParseFailure by default). All parsing-failed logs are uploaded with the input content as the key name (Key) and the raw log content as the key value (Value).@&()='",;:<>[]{}/ \n\t\r and can be modified as needed.

Last updated:2024-01-20 17:14:28
1571394459,http://127.0.0.1/my/course/4|10.135.46.111|200,status:DEAD,
{"processors": [{"type": "processor_split_delimiter","detail": {"Delimiter": ",","ExtractKeys": [ "time", "msg1","msg2"]},"processors": [{"type": "processor_timeformat","detail": {"KeepSource": true,"TimeFormat": "%s","SourceKey": "time"}},{"type": "processor_split_delimiter","detail": {"KeepSource": false,"Delimiter": "|","SourceKey": "msg1","ExtractKeys": [ "submsg1","submsg2","submsg3"]},"processors": []},{"type": "processor_split_key_value","detail": {"KeepSource": false,"Delimiter": ":","SourceKey": "msg2"}}]}]}
time: 1571394459
submsg1: http://127.0.0.1/my/course/4
submsg2: 10.135.46.111
submsg3: 200
status: DEAD
Plugin Feature | Plugin Name | Feature Description |
Field extraction | processor_log_string | Performs multi-character (line breaks) parsing of fields, typically for single-line logs. |
Field extraction | processor_multiline | Performs first-line regex parsing of fields (full text in multi lines mode), typically for multi-line logs. |
Field extraction | processor_multiline_fullregex | Performs first-line regex parsing of fields (full regex mode), typically for multi-line logs; extracts regexes from multi-line logs. |
Field extraction | processor_fullregex | Extracts fields (full regex mode) from single-line logs. |
Field extraction | processor_json | Expands field values in JSON format. |
Field extraction | processor_split_delimiter | Extracts fields (single-/multi-character separator mode). |
Field extraction | processor_split_key_value | Extracts fields (key-value pair mode). |
Field processing | processor_drop | Discards fields. |
Field processing | processor_timeformat | Parses time fields in raw logs to convert time formats and set parsing results as log time. |
Plugin Name | Support Subitem Parsing | Plugin Parameter | Required | Feature Description |
processor_multiline | No | BeginRegex | Yes | Defines the first-line matching regex for multi-line logs. |
processor_multiline_fullregex | Yes | BeginRegex | Yes | Defines the first-line matching regex for multi-line logs. |
| | ExtractRegex | Yes | Defines the extraction regex after multi-line logs are extracted. |
| | ExtractKeys | Yes | Defines the extraction keys. |
processor_fullregex | Yes | ExtractRegex | Yes | Defines the extraction regex. |
| | ExtractKeys | Yes | Defines the extraction keys. |
processor_json | Yes | SourceKey | No | Defines the name of the upper-level processor key processed by the current processor. |
| | KeepSource | No | Defines whether to retain `SourceKey` in the final key name. |
processor_split_delimiter | Yes | SourceKey | No | Defines the name of the upper-level processor key processed by the current processor. |
| | KeepSource | No | Defines whether to retain `SourceKey` in the final key name. |
| | Delimiter | Yes | Defines the separator (single or multiple characters). |
| | ExtractKeys | Yes | Defines the extraction keys after separator splitting. |
processor_split_key_value | No | SourceKey | No | Defines the name of the upper-level processor key processed by the current processor. |
| | KeepSource | No | Defines whether to retain `SourceKey` in the final key name. |
| | Delimiter | Yes | Defines the separator between the `Key` and `Value` in a string. |
processor_drop | No | SourceKey | Yes | Defines the name of the upper-level processor key processed by the current processor. |
processor_timeformat | No | SourceKey | Yes | Defines the name of the upper-level processor key processed by the current processor. |
| | TimeFormat | Yes | Defines the time parsing format for the `SourceKey` value (time data string in logs). |
define-log as Log Topic Name and click Confirm.[directory prefix expression]/**/[filename expression].Parameter | Description |
Directory Prefix | Directory prefix for log files, which supports only the wildcard characters * and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
/**/ | Current directory and all its subdirectories. |
File Name | Log file name, which supports only the wildcard characters * and ?.\* indicates to match any multiple characters.? indicates to match any single character. |
No. | Directory Prefix Expression | Filename Expression | Description |
1. | /var/log/nginx | access.log | In this example, the log path is configured as /var/log/nginx/**/access.log. LogListener will listen for log files named access.log in all subdirectories in the /var/log/nginx prefix path. |
2. | /var/log/nginx | *.log | In this example, the log path is configured as /var/log/nginx/**/*.log. LogListener will listen for log files suffixed with .log in all subdirectories in the /var/log/nginx prefix path. |
3. | /var/log/nginx | error* | In this example, the log path is configured as /var/log/nginx/**/error*. LogListener will listen for log files prefixed with error in all subdirectories in the /var/log/nginx prefix path. |
key:"{"substream":XXX}".log/*.log and rename the old file after log rotation as log/*.log.xxxx.Key and the original log content is the Value for log uploading.Last updated:2024-01-20 17:14:28
Parameter Format | Description | Example |
%a | Abbreviation for a weekday | Fri |
%A | Full name for a weekday | Friday |
%b | Abbreviation for a month | Jan |
%B | Full name for a month | January |
%d | A day of a month (01 to 31) | 31 |
%h | Abbreviation for a month, same as %b | Jan |
%H | An hour in the 24-hour system (00 to 23) | 22 |
%I | An hour in the 12-hour system (01 to 12) | 11 |
%m | Month (01 to 12), with 01 indicating January | 08 |
%M | Minute (00 to 59), with 01 indicating one minute | 59 |
%n | Line break | Line break |
%p | Morning (AM) or afternoon (PM) | AM/PM |
%r | Specific 12-hour combined time format, equivalent to %I:%M:%S %p | 11:59:59 AM |
%R | Specific 24-hour combined time format, equivalent to %H:%M | 23:59 |
%S | Second (00 to 59) | 59 |
%f | Millisecond | 0.123 |
%t | Tab | Tab |
%y | Year, without the century (00 to 99) | 19 |
%Y | Year, with the century, with 2018 indicating the year of 2018 | 2019 |
%C | Century (obtained by dividing the year by 100, ranging from 00 to 99) | 20 |
%e | A day of a month (01 to 31) | 31 |
%j | A day of a year (001 to 366) | 365 |
%u | Weekday represented by a digit (1 to 7), with 1 indicating Monday and 7 indicating Sunday | 1 |
%U | A week of a year (00 to 53), with the weeks starting from Sunday, that is, the first Sunday as the first day of the first week | 23 |
%w | Weekday represented by a digit (0 to 6), with 0 indicating Sunday and 6 indicating Saturday | 5 |
%W | A week of a year (00 to 53), with the weeks starting from Monday, that is, the first Monday as the first day of the first week | 23 |
%s | Second-level (10-digit) UNIX timestamp | 1571394459 |
%F | Millisecond-level (13-digit) UNIX timestamp | 1571394459123 |
%z | Supports time zone parsing for time fields, including ISO 8601 time format and GMT time format | UTC/+0800/MST |
Time Indication Sample | Time Extraction Format |
2018-07-16 13:12:57.123 | %Y-%m-%d %H:%M:%S.%f |
[2018-07-16 13:12:57.012] | [%Y-%m-%d %H:%M:%S.%f] |
06/Aug/2019 12:12:19 +0800 | %d/%b/%Y %H:%M:%S |
Monday, 02-Oct-19 16:07:05 MST | %A, %d-%b-%y %H:%M:%S |
1571394459 | %s |
1571394459123 | %F (LogListener 2.6.4 or later) |
06/Aug/2019 12:12:19 +0800 | %d/%b/%Y %H:%M:%S %z |
Monday, 02-Oct-19 16:07:05 MST | %A, %d-%b-%y %H:%M:%S %z |
Last updated:2025-12-03 11:22:42


__TAG__.{Machine group metadata key}:{Machine group metadata value}.

?<> and reports it together with logs in the form of __TAG__.{Field name}:{Extracted field}. For example, (?<name>.*?) signifies that the field extracted by .*? will be named "name". Up to 5 named capture groups are supported.
__TAG__.{i}:{Extracted field}, where i indicates the serial number of the capture group. Up to 5 non-named capture groups are supported.
/logs
  - /appA/userA
    - access.log
  - /appB/userB
    - access.log
  - /appC/userC
    - access.log
/logs/(.*?)/.*
# /logs/appA/userA/access.log will include a new key value.
__TAG__.1: appA
# /logs/appB/userB/access.log will include a new key value.
__TAG__.1: appB
# /logs/appC/userC/access.log will include a new key value.
__TAG__.1: appC
/logs/(?<APP>.*?)/(?<USER>.*?)/access.log
# /logs/appA/userA/access.log will include a new key value.
__TAG__.APP: appA
__TAG__.USER: userA
# /logs/appB/userB/access.log will include a new key value.
__TAG__.APP: appB
__TAG__.USER: userB
# /logs/appC/userC/access.log will include a new key value.
__TAG__.APP: appC
__TAG__.USER: userC


Last updated:2024-01-20 17:14:28
/opt/logs/*.log, you can specify the collection path as /opt/logs and the filename as *.log. To collect logs such as /opt/logs/service1/*.log and /opt/logs/service2/*.log, you can specify the folder of the collection path as /opt/logs/service* and the file name as *.log.
Field | Description |
container_id | ID of the container to which logs belong |
container_name | The name of the container to which logs belong |
image_name | The image name of the container to which logs belong |
namespace | The namespace of the Pod to which logs belong |
pod_uid | The UID of the Pod to which logs belong |
pod_name | The name of the Pod to which logs belong |
pod_label_{label name} | The labels of the Pod to which logs belong (for example, if a Pod has two labels: app=nginx and env=prod, the reported log will have two metadata entries attached: pod_label_app:nginx and pod_label_env:prod). |
Parsing Mode | Description | Documentation |
Full text in a single line | A log contains only one line of content, and the line break `\n` marks the end of a log. Each log will be parsed into a complete string with __CONTENT__ as the key. When log Index is enabled, you can search for log content via full-text search. The time attribute of a log is determined by the collection time. | |
Full text in multi lines | A log with full text in multi lines spans multiple lines, and a first-line regular expression is used for matching. When a line of a log matches the preset regular expression, it is considered as the beginning of a log, and the log ends before the next matching line. A default key, __CONTENT__, will be set as well. The time attribute of a log is determined by the collection time. The regular expression can be generated automatically. | |
Single line - full regex | The single-line - full regular expression mode is a log parsing mode where multiple key-value pairs can be extracted from a complete log. When configuring the single-line - full regular expression mode, you need to enter a sample log first and then customize your regular expression. After the configuration is completed, the system will extract the corresponding key-value pairs according to the capture group in the regular expression. The regular expression can be generated automatically. | |
Multiple lines - full regex | The multi-line - full regular expression mode is a log parsing mode where multiple key-value pairs can be extracted from a complete piece of log data that spans multiple lines in a log text file (such as Java program logs) based on a regular expression. When configuring the multi-line - full regular expression mode, you need to enter a sample log first and then customize your regular expression. After the configuration is completed, the system will extract the corresponding key-value pairs according to the capture group in the regular expression. The regular expression can be generated automatically. | |
JSON | A JSON log automatically extracts the key at the first layer as the field name and the value at the first layer as the field value to implement structured processing of the entire log. Each complete log ends with a line break `\n`. | |
Separator | Structure the data in a log with the specified separator, and each complete log ends with a line break `\n`. Define a unique key for each separate field. Leave the field blank if you don’t need to collect it. At least one field is required. |
ErrorCode is 404. You can enable the filter and configure rules as needed.
Last updated:2024-01-20 17:14:28
Obtain the ID of the log topic (topicId). For more information, see Managing Log Topic.
Obtain the CLS domain name (CLS_HOST) of the region of your log topic. For details of the CLS domain name list, see Available Regions.
Obtain the API key ID (TmpSecretId) and API key (TmpSecretKey) required for CLS authentication. To obtain the API key and API key ID, go to Manage API Key.
Run the wget command to download the LogConfig.yaml CRD declaration file, using the master node path /usr/local/ as an example:
wget https://mirrors.tencent.com/install/cls/k8s/LogConfig.yaml
The LogConfig.yaml declaration file consists of the following two parts:
clsDetail: The configuration for shipping to CLS.
inputDetail: Log source configuration.

apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig ## Default value
metadata:
  name: test ## CRD resource name, which is unique in the cluster.
spec:
  clsDetail: ## The configuration for shipping to CLS
    ...
  inputDetail: ## Log source configuration
    ...
clsDetail:
  # You need to specify the logset and topic names to automatically create a log topic, which cannot be modified after being defined.
  logsetName: test ## The name of the CLS logset. If there is no logset with this name, one will be created automatically. If there is such a logset, a log topic will be created under it.
  topicName: test ## The name of the CLS log topic. If there is no log topic with this name, one will be created automatically.
  # Select an existing logset and log topic. If the logset is specified but the log topic is not, a log topic will be created automatically, which cannot be modified after being defined.
  logsetId: xxxxxx-xx-xx-xx-xxxxxxxx ## The ID of the CLS logset. The logset needs to be created in advance in CLS.
  topicId: xxxxxx-xx-xx-xx-xxxxxxxx ## The ID of the CLS log topic. The log topic needs to be created in advance in CLS and not occupied by other collection configurations.
  region: ap-xxx ## Topic region for cross-region shipping
  # Define the log topic configuration when a log topic is created automatically. The configuration cannot be modified after being defined.
  period: 30 ## Lifecycle in days. Value range: 1–3600. `3640` indicates permanent storage.
  storageType: hot ## Log topic storage class. Valid values: `hot` (STANDARD); `cold` (STANDARD_IA). Default value: `hot`.
  HotPeriod: 7 ## Transition cycle in days. Value range: 1–3600. It is valid only if `storageType` is `hot`.
  partitionCount: ## The number (an integer) of log topic partitions. Default value: `1`. Maximum value: `10`.
  autoSplit: true ## Whether to enable auto-split (Boolean). Default value: `true`.
  maxSplitPartitions: 10 ## The maximum number (an integer) of partitions
  tags: ## Tag description list. This parameter is used to bind a tag to a log topic. Up to nine tag key-value pairs are supported, and a resource can be bound to only one tag key.
    - key: xxx ## Tag key
      value: xxx ## Tag value
  # Define collection rules
  logType: json_log ## Log parsing format. Valid values: `json_log` (JSON); `delimiter_log` (separator); `minimalist_log` (full text in a single line); `multiline_log` (full text in multi lines); `fullregex_log` (single line - full regex); `multiline_fullregex_log` (multiple lines - full regex). Default value: `minimalist_log`.
  logFormat: xxx ## Log formatting method
  excludePaths: ## Collection path blocklist
    - type: File ## Type. Valid values: `File`, `Path`.
      value: /xx/xx/xx/xx.log ## The value of `type`
  userDefineRule: xxxxxx ## Custom collection rule, which is a serialized JSON string.
  extractRule: {} ## Extraction and filter rule. If `ExtractRule` is set, `LogType` must be set. For more information, see the extractRule description.
  AdvancedConfig: ## Advanced collection configuration
    MaxDepth: 1 ## Maximum number of directory levels
    FileTimeout: 60 ## File timeout attribute
  # Define index configuration, which cannot be modified later.
  indexs: ## You can customize the index method and field when creating a topic.
    - indexName: ## The field for which to configure the key value or meta field index. You don't need to add the `__TAG__.` prefix to the key of the meta field and can just use that of the corresponding field when uploading a log, as the `__TAG__.` prefix will be automatically added for display in the Tencent Cloud console.
      indexType: ## Field type. Valid values: `long`, `text`, `double`.
      tokenizer: ## Field delimiter. Each character represents a delimiter. Only English symbols and \n\t\r are supported. For `long` and `double` fields, leave it empty. For `text` fields, we recommend that you use @&?|#()='",;:<>[]{}/ \n\t\r\ as the delimiter.
      sqlFlag: ## Whether the analysis feature is enabled for the field (Boolean)
      containZH: ## Whether Chinese characters are contained (Boolean)
Name | Type | Required | Description |
timeKey | String | No | The specified field in the log to be used as the log timestamp. If the configuration is empty, the actual log collection time will be used. time_key and time_format must appear in pairs. |
timeFormat | String | No | Time field format. For more information, see the output parameters of the time format description of the strftime function in C programming language. |
delimiter | String | No | The delimiter for delimited logs, which is valid only if log_type is delimiter_log. |
logRegex | String | No | Full log matching rule, which is valid only if log_type is fullregex_log. |
beginningRegex | String | No | First-line matching rule, which is valid only if log_type is multiline_log or multiline_fullregex_log. |
unMatchUpload | String | No | Whether to upload logs that failed to be parsed. Valid values: true (yes); false (no). |
unMatchedKey | String | No | The key used to store logs that failed to be parsed. |
backtracking | String | No | The size of the data to be rewound in incremental collection mode. Valid values: -1 (full collection); 0 (incremental collection). Default value: -1. |
keys | Array of String | No | The key name of each extracted field. An empty key indicates to discard the field. This parameter is valid only if log_type is delimiter_log, fullregex_log, or multiline_fullregex_log. json_log logs use the key of JSON itself. |
filterKeys | Array of String | No | Log keys to be filtered, which correspond to FilterRegex by subscript. |
filterRegex | Array of String | No | The regex of the log keys to be filtered, which corresponds to FilterKeys by subscript. |
isGBK | String | No | Whether it is GBK-encoded. Valid values: 0 (no); 1 (yes). Note: This field may return null, indicating that no valid values can be obtained. |
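timeFormat follows strftime conventions. As a rough illustration (not collector code), this is how such a format string could parse a time field in Python; the sample value and format are assumptions:

from datetime import datetime

raw = "22/Jan/2019:19:19:30"          # assumed sample value of the field named by timeKey
fmt = "%d/%b/%Y:%H:%M:%S"             # assumed strftime-style timeFormat for that value
ts = datetime.strptime(raw, fmt)
print(int(ts.timestamp()))            # would become the log's time attribute (local time zone assumed)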
\n to mark the end of a log. For easier structural management, a default key value \_\_CONTENT\_\_ is given to each log, but the log data itself will no longer be structured, nor will the log field be extracted. The time attribute of a log is determined by the collection time.
Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Single-line log
    logType: minimalist_log
__CONTENT__:Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
\n cannot be used to mark the end of a log. To help CLS distinguish between logs, a first-line regular expression is used for matching. When a line of a log matches the preset regular expression, it is considered as the beginning of the log, and the log ends before the next matching line. A default key value \_\_CONTENT\_\_ is also set, but the log data itself is not structured, and no log fields are extracted. The time attribute of a log is determined by the collection time.
2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:
java.lang.NullPointerException
    at com.test.logging.FooFactory.createFoo(FooFactory.java:15)
    at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Multi-line log
    logType: multiline_log
    extractRule:
      # Only a line that starts with a date time is considered the beginning of a new log. Otherwise, add the line break `\n` to the end of the current log.
      beginningRegex: \d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s.+
__CONTENT__:2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:\njava.lang.NullPointerException\n at com.test.logging.FooFactory.createFoo(FooFactory.java:15)\n at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Full Regex
    logType: fullregex_log
    extractRule:
      # Regular expression, in which the corresponding values will be extracted based on the `()` capture groups
      logRegex: (\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
      beginningRegex: (\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
      # List of extracted keys, which are in one-to-one correspondence with the extracted values
      keys: ['remote_addr','time_local','request_method','request_url','http_protocol','http_host','status','request_length','body_bytes_sent','http_referer','http_user_agent','request_time','upstream_response_time']
body_bytes_sent: 9703
http_host: 127.0.0.1
http_protocol: HTTP/1.1
http_referer: http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum
http_user_agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
remote_addr: 10.135.46.111
request_length: 782
request_method: GET
request_time: 0.354
request_url: /my/course/1
status: 200
time_local: [22/Jan/2019:19:19:30 +0800]
upstream_response_time: 0.354
[2018-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Multiple lines - full regex
    logType: multiline_fullregex_log
    extractRule:
      # The first-line full regular expression: only a line that starts with a date time is considered the beginning of a new log. Otherwise, add the line break `\n` to the end of the current log.
      beginningRegex: \[\d+-\d+-\w+:\d+:\d+,\d+\]\s\[\w+\]\s.*
      # Regular expression, in which the corresponding values will be extracted based on the `()` capture groups
      logRegex: \[(\d+-\d+-\w+:\d+:\d+,\d+)\]\s\[(\w+)\]\s(.*)
      # List of extracted keys, which are in one-to-one correspondence with the extracted values
      keys: ['time','level','msg']
time: 2018-10-01T10:30:01,000
level: INFO
msg: java.lang.Exception: exception happened
at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
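To make the rule above easier to follow, here is a purely illustrative Python sketch (not LogListener's implementation) that applies the same beginningRegex and logRegex to the sample log:

import re

beginning = re.compile(r"\[\d+-\d+-\w+:\d+:\d+,\d+\]\s\[\w+\]\s.*")          # first-line match
log_regex = re.compile(r"\[(\d+-\d+-\w+:\d+:\d+,\d+)\]\s\[(\w+)\]\s(.*)", re.S)
keys = ["time", "level", "msg"]

lines = [
    "[2018-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened",
    "    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)",
    "    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)",
    "    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)",
]

# Lines are appended to the current record until the next line that matches beginningRegex.
records, current = [], []
for line in lines:
    if beginning.match(line) and current:
        records.append("\n".join(current))
        current = []
    current.append(line)
if current:
    records.append("\n".join(current))

for record in records:
    m = log_regex.match(record)
    if m:
        print(dict(zip(keys, m.groups())))   # time / level / msg key-value pairs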
\n.{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # JSON log
    logType: json_log
agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
body_sent: 23
http_host: 127.0.0.1
method: POST
referer: http://127.0.0.1/my/course/4
remote_ip: 10.135.46.111
request: POST /event/dispatch HTTP/1.1
response_code: 200
responsetime: 0.232
time_local: 22/Jan/2019:19:19:34 +0800
upstreamhost: unix:/tmp/php-cgi.sock
upstreamtime: 0.232
url: /event/dispatch
xff: -
\n. When CLS processes separator logs, you need to define a unique key for each separate field.
10.20.20.10 ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: 35 ::: http://127.0.0.1/
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  clsDetail:
    topicId: xxxxxx-xx-xx-xx-xxxxxxxx
    # Separator log
    logType: delimiter_log
    extractRule:
      # Separator
      delimiter: ':::'
      # List of extracted keys, which are in one-to-one correspondence to the separated fields
      keys: ['IP','time','request','host','status','length','bytes','referer']
IP: 10.20.20.10
bytes: 35
host: 127.0.0.1
length: 647
referer: http://127.0.0.1/
request: GET /online/sample HTTP/1.1
status: 200
time: [Tue Jan 22 14:49:45 CST 2019 +0800]
inputDetail:
  type: container_stdout ## Log collection type. Valid values: `container_stdout` (container standard output); `container_file` (container file); `host_file` (host file).

  containerStdout: ## Container standard output configuration, which is valid only if `type` is `container_stdout`.
    namespace: default ## The Kubernetes namespace of the container to be collected. Separate multiple namespaces by `,`, for example, `default,namespace`. If this field is not specified, it indicates all namespaces. Note that this field cannot be specified if `excludeNamespace` is specified.
    excludeNamespace: nm1,nm2 ## The Kubernetes namespace of the container to be excluded. Separate multiple namespaces by `,`, for example, `nm1,nm2`. If this field is not specified, it indicates all namespaces. Note that this field cannot be specified if `namespace` is specified.
    nsLabelSelector: environment in (production),tier in (frontend) ## The namespace label for filtering namespaces
    allContainers: false ## Whether to collect the standard output of all containers in the specified namespace. Note that if `allContainers=true`, you cannot specify `workload`, `includeLabels`, and `excludeLabels` at the same time.
    containerOperator: in ## Container selection method. Valid values: `in` (include); `not in` (exclude).
    container: xxx ## The name of the container to be or not to be collected
    includeLabels: ## The labels of the Pods to be collected. This field cannot be specified if `workload` is specified.
      key: value1 ## Pods with multiple values of the same key can be matched. For example, if you enter `environment = production,qa`, Pods with the `production` or `qa` value of the `environment` key will be matched. Separate multiple values by comma. If `excludeLabels` is also specified, Pods in the intersection will be matched.
    excludeLabels: ## The labels of the Pods to be excluded. This field cannot be specified if `workload`, `namespace`, and `excludeNamespace` are specified.
      key2: value2 ## Pods with multiple values of the same key can be matched. For example, if you enter `environment = production,qa`, Pods with the `production` or `qa` value of the `environment` key will be excluded. Separate multiple values by comma. If `includeLabels` is also specified, Pods in the intersection will be matched.
    metadataLabels: ## The Pod labels to be collected as metadata. If this field is not specified, all Pod labels will be collected as metadata.
      - label1
    metadataContainer: ## The container environments of the metadata to be collected. If this field is not specified, metadata (`namespace`, `pod_name`, `pod_ip`, `pod_uid`, `container_id`, `container_name`, and `image_name`) of all container environments will be collected.
      - namespace
    customLabels: ## Custom metadata
      label: l1
    workloads: ## The workloads of the specified workload types in the specified namespaces of the containers of the logs to be collected
      - container: xxx ## The name of the container to be collected. If this field is not specified, all containers in the workload Pod will be collected.
        containerOperator: in ## Container selection method. Valid values: `in` (include); `not in` (exclude).
        kind: deployment ## Workload type. Valid values: `deployment`, `daemonset`, `statefulset`, `job`, `cronjob`.
        name: sample-app ## Workload name
        namespace: prod ## Workload namespace

  containerFile: ## Container file configuration, which is valid only if `type` is `container_file`.
    namespace: default ## The Kubernetes namespace of the container to be collected. You must specify a namespace.
    excludeNamespace: nm1,nm2 ## The Kubernetes namespace of the container to be excluded. Separate multiple namespaces by `,`, for example, `nm1,nm2`. If this field is not specified, it indicates all namespaces. Note that this field cannot be specified if `namespace` is specified.
    nsLabelSelector: environment in (production),tier in (frontend) ## The namespace label for filtering namespaces
    containerOperator: in ## Container selection method. Valid values: `in` (include); `not in` (exclude).
    container: xxx ## The name of the container to be collected. If it is `*`, it indicates the names of all containers to be collected.
    logPath: /var/logs ## Log folder. Wildcards are not supported.
    filePattern: app_*.log ## Log filename. Wildcards `*` and `?` are supported. `*` indicates to match any number of characters, while `?` indicates to match any single character.
    includeLabels: ## The labels of the Pods to be collected. This field cannot be specified if `workload` is specified.
      key: value1 ## The `metadata` will be carried in the log collected based on the collection rule and reported to the consumer. Pods with multiple values of the same key can be matched. For example, if you enter `environment = production,qa`, Pods with the `production` or `qa` value of the `environment` key will be matched. Separate values by comma. If `excludeLabels` is also specified, Pods in the intersection will be matched.
    excludeLabels: ## Pods with the specified labels will be excluded. This field cannot be specified if `workload` is specified.
      key2: value2 ## Pods with multiple values of the same key can be matched. For example, if you enter `environment = production,qa`, Pods with the `production` or `qa` value of the `environment` key will be excluded. Separate multiple values by comma. If `includeLabels` is also specified, Pods in the intersection will be matched.
    metadataLabels: ## The Pod labels to be collected as metadata. If this field is not specified, all Pod labels will be collected as metadata.
      - namespace
    metadataContainer: ## The container environments of the metadata to be collected. If this field is not specified, metadata (`namespace`, `pod_name`, `pod_ip`, `pod_uid`, `container_id`, `container_name`, and `image_name`) of all container environments will be collected.
    customLabels: ## Custom metadata
      key: value
    workload:
      container: xxx ## The name of the container to be collected. If this field is not specified, all containers in the workload Pod will be collected.
      containerOperator: in ## Container selection method. Valid values: `in` (include); `not in` (exclude).
      kind: deployment ## Workload type. Valid values: `deployment`, `daemonset`, `statefulset`, `job`, `cronjob`.
      name: sample-app ## Workload name
      namespace: prod ## Workload namespace

  hostFile: ## Node file path, which is valid only if `type` is `host_file`.
    filePattern: '*.log' ## Log filename. Wildcards `*` and `?` are supported. `*` indicates to match any number of characters, while `?` indicates to match any single character.
    logPath: /tmp/logs ## Log folder. Wildcards are not supported.
    customLabels: ## Custom metadata
      label1: v1
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      namespace: default
      allContainers: true
  ...
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      allContainers: false
      workloads:
        - namespace: production
          name: ingress-gateway
          kind: deployment
  ...
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_stdout
    containerStdout:
      namespace: production
      allContainers: false
      includeLabels:
        k8s-app: nginx
  ...
Collect the access.log file in the /data/nginx/log/ path in the NGINX container in the Pod that belongs to the ingress-gateway deployment in the production namespace:
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_file
    containerFile:
      namespace: production
      workload:
        name: ingress-gateway
        kind: deployment
      container: nginx
      logPath: /data/nginx/log
      filePattern: access.log
  ...
Collect the access.log file in the /data/nginx/log/ path in the NGINX container in the Pod whose pod labels contain "k8s-app=ingress-gateway" in the production namespace:
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: container_file
    containerFile:
      namespace: production
      includeLabels:
        k8s-app: ingress-gateway
      container: nginx
      logPath: /data/nginx/log
      filePattern: access.log
  ...
Collect .log files in the host path /data/:
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
spec:
  inputDetail:
    type: host_file
    hostFile:
      logPath: /data
      filePattern: '*.log'
  ...
After the LogConfig.yaml declaration file is defined in Step 2. Define the LogConfig object, you can run the kubectl command to create a LogConfig object based on the file:
kubectl create -f /usr/local/LogConfig.yaml
Last updated:2025-12-03 11:22:42










For example, if the logs to be collected are /opt/logs/*.log, you can specify the directory prefix as /opt/logs and the file name as *.log.

Field Name | Description |
container_id | ID of the container to which the log belongs. |
container_name | Name of the container to which the log belongs |
image_name | Image name/IP address of the container to which the log belongs. |
namespace | Namespace of the Pod to which the log belongs. |
pod_uid | UID of the Pod to which the log belongs. |
pod_name | Name of the Pod to which the log belongs. |
pod_ip | IP address of the Pod to which the log belongs. |
pod_label_{label name} | Label of the Pod to which the log belongs. For example, if a Pod has two labels, app=nginx and env=prod, the uploaded log will be accompanied by two metadata entries, pod_label_app:nginx and pod_label_env:prod. |

Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
__CONTENT__:Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:
java.lang.NullPointerException
    at com.test.logging.FooFactory.createFoo(FooFactory.java:15)
    at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s.+
__CONTENT__:2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:\njava.lang.NullPointerException\n at com.test.logging.FooFactory.createFoo(FooFactory.java:15)\n at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354
(\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
body_bytes_sent: 9703
http_host: 127.0.0.1
http_protocol: HTTP/1.1
http_referer: http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum
http_user_agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
remote_addr: 10.135.46.111
request_length: 782
request_method: GET
request_time: 0.354
request_url: /my/course/1
status: 200
time_local: [22/Jan/2019:19:19:30 +0800]
upstream_response_time: 0.354
[2018-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
    at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
    at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
    at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
\[\d+-\d+-\w+:\d+:\d+,\d+]\s\[\w+]\s.*
\[(\d+-\d+-\w+:\d+:\d+,\d+)\]\s\[(\w+)\]\s(.*)
() capture group, you can customize the key name of each group as follows:
time: 2018-10-01T10:30:01,000
level: INFO
msg: java.lang.Exception: exception happened
at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}
agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
body_sent: 23
http_host: 127.0.0.1
method: POST
referer: http://127.0.0.1/my/course/4
remote_ip: 10.135.46.111
request: POST /event/dispatch HTTP/1.1
response_code: 200
responsetime: 0.232
time_local: 22/Jan/2019:19:19:34 +0800
upstreamhost: unix:/tmp/php-cgi.sock
upstreamtime: 0.232
url: /event/dispatch
xff: -
10.20.20.10 - ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: 35 ::: http://127.0.0.1/
:::, this log will be divided into eight fields, and each of these fields will be assigned a unique key, as shown below:
IP: 10.20.20.10 -
bytes: 35
host: 127.0.0.1
length: 647
referer: http://127.0.0.1/
request: GET /online/sample HTTP/1.1
status: 200
time: [Tue Jan 22 14:49:45 CST 2019 +0800]
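As a plain illustration of this separator behavior (not the collector's code), the same result can be reproduced in Python:

raw = "10.20.20.10 - ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: 35 ::: http://127.0.0.1/"
keys = ["IP", "time", "request", "host", "status", "length", "bytes", "referer"]

# Split on the separator and pair each piece with its user-defined key.
fields = [part.strip() for part in raw.split(":::")]
print(dict(zip(keys, fields)))
# {'IP': '10.20.20.10 -', 'time': '[Tue Jan 22 14:49:45 CST 2019 +0800]', 'request': 'GET /online/sample HTTP/1.1', ...}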
1571394459, http://127.0.0.1/my/course/4|10.135.46.111|200, status:DEAD,
{"processors": [{"type": "processor_split_delimiter","detail": {"Delimiter": ",","ExtractKeys": [ "time", "msg1","msg2"]},"processors": [{"type": "processor_timeformat","detail": {"KeepSource": true,"TimeFormat": "%s","SourceKey": "time"}},{"type": "processor_split_delimiter","detail": {"KeepSource": false,"Delimiter": "|","SourceKey": "msg1","ExtractKeys": [ "submsg1","submsg2","submsg3"]},"processors": []},{"type": "processor_split_key_value","detail": {"KeepSource": false,"Delimiter": ":","SourceKey": "msg2"}}]}]}
time: 1571394459
submsg1: http://127.0.0.1/my/course/4
submsg2: 10.135.46.111
submsg3: 200
status: DEAD
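The nested processors configuration can be hard to read in JSON form, so here is an illustrative Python walk-through of the same three steps; it only mimics the configuration's effect and is not plugin code:

raw = "1571394459, http://127.0.0.1/my/course/4|10.135.46.111|200, status:DEAD,"

# Step 1: processor_split_delimiter with "," -> time / msg1 / msg2
time_field, msg1, msg2 = [p.strip() for p in raw.split(",")][:3]

result = {"time": time_field}   # Step 2a: processor_timeformat keeps `time` ("%s" = Unix timestamp)

# Step 2b: processor_split_delimiter with "|" on msg1 -> submsg1 / submsg2 / submsg3
result.update(dict(zip(["submsg1", "submsg2", "submsg3"], msg1.split("|"))))

# Step 2c: processor_split_key_value with ":" on msg2 -> status: DEAD
key, _, value = msg2.partition(":")
result[key] = value

print(result)
# {'time': '1571394459', 'submsg1': 'http://127.0.0.1/my/course/4', 'submsg2': '10.135.46.111', 'submsg3': '200', 'status': 'DEAD'}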
ErrorCode is set to 404. You can enable the filter and configure rules as needed.
Name | Description | Configuration Item |
Timeout property | This configuration controls the timeout for log files. If a log file has no updates within the specified time, it is considered timed out. LogListener will stop collecting from that timed-out log file. If you have a large number of log files, it is recommended to reduce the timeout to avoid LogListener performance waste. | No timeout: Log files never time out. Custom: The timeout for log files can be customized. |
Maximum directory levels | The configuration controls the maximum directory depth for log collection. LogListener does not collect log files in directories that exceed the specified maximum directory depth. If your target collection path includes fuzzy matching, it is recommended to configure an appropriate maximum directory depth to avoid LogListener performance waste. | An integer greater than 0. 0 means no drilling down into subdirectories. |
Settings of logs with parsing and merging failure | Note: The feature for merging logs that failed to be parsed can only be configured for LogListener 2.8.8 and later versions. This configuration allows LogListener to merge the logs that have continuously failed to be parsed in the target log file into a single log for upload during collection. If your first-line regular expression does not cover all multi-line logs, it is recommended to enable this configuration. This helps avoid the situation where a multi-line log, which fails the first-line match, gets split into multiple individual log entries. | Enable/Disable |










Last updated:2025-11-19 16:29:16



Configuration Item | Type | Description |
Collection Rule Name | Input Box | Input the name of this collection rule. |
Network type | Radio | Specify the Syslog transport protocol: UDP/TCP. |
Resolution Protocol | Radio | Specifies the protocol used to parse logs. It is empty by default, indicating no parsing. Valid values: rfc3164: parse logs using the RFC 3164 protocol. rfc5424: parse logs using the RFC 5424 protocol. auto: automatically select the appropriate parsing protocol. |
Listening Address | Input Box | The Syslog forwarding address and port, in the format [ip]:[port]. To collect logs from the local machine, set the forwarding address to 127.0.0.1 and use any idle port, such as 127.0.0.1:9000. For cross-host collection via Syslog forwarding, see rsyslog forwarding configuration. |
Upload parsing-failed logs | Switch | Specifies the operation upon parsing failure. If enabled, the full text of a log that fails to be parsed is uploaded under the configured key. If disabled, logs that fail to be parsed are discarded. |
Key Name of Parsing-Failed Logs | Input Box | The key name under which parsing-failed logs are uploaded. |


Field | Description |
HOSTNAME | Host name. The current host name will be obtained if it is not provided in the log. |
program | tag field in the protocol. |
priority | priority field in the protocol. |
facility | facility field in the protocol. |
severity | severity field in the protocol. |
timestamp | Timestamp of the log. |
content | Log content, which will contain all the content of unparsed logs if parsing fails. |
SOURCE | IP of the current host. |
client_ip | Client IP address for log transfer. |
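For reference on the facility and severity fields above: in RFC 3164/RFC 5424, the <PRI> value at the start of a frame encodes both, as the small illustrative snippet below shows (the sample value is an assumption):

pri = 13                 # e.g. taken from a frame such as "<13>Jan 22 14:49:45 host app: message"
facility = pri // 8      # 13 // 8 = 1 (user-level messages)
severity = pri % 8       # 13 %  8 = 5 (notice)
print(facility, severity)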
*.* @@127.0.0.1:1000
sudo service rsyslog restart
Last updated:2025-04-18 16:20:44






Configuration Item | Required | Description |
Event channel | Yes | It indicates the event channel designated for target collection, with the following configuration options available: Application (application event): Records events generated by applications, such as software crashes, configuration changes, and error messages. System (system event): Records events related to operating system components, such as drivers, system services, and hardware issues. Security (security event): Records events related to security, such as user logins/logouts, permission changes, and audit policy changes. Setup (configuration event): Records events related to system setup and configuration changes. ALL (all events). Note: It is recommended that each event channel on a server be dedicated to a single collection configuration. Using the same event channel for multiple collection configurations can result in data duplication. |
Start time | Yes | The following two options are supported: Custom time: Event logs will be collected starting from the time you specify. Full collection: All event logs from the server will be collected. Note: If an event exceeds the retention period set by the Windows system, its logs will not be collected. |
Custom Time | Yes | It is required to specify the time for collecting event logs when Start time is set to Custom time. |
Event ID | No | Supports positive filtering for specific values (such as 20) or value ranges (such as 0-20), as well as negative filtering for individual values (such as -20). Separate multiple filter criteria with commas. For example, "1-200,-100" indicates that event logs with IDs in the range 1-200 will be collected, excluding those with an event ID of 100. |
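The event ID filter syntax above can be mimicked with the illustrative Python function below (it only demonstrates the matching rule described in the table, not LogListener's code):

def match_event_id(event_id: int, rule: str) -> bool:
    include, exclude = [], set()
    for item in rule.split(","):
        item = item.strip()
        if item.startswith("-"):              # negative filter, e.g. "-100"
            exclude.add(int(item[1:]))
        elif "-" in item:                     # range filter, e.g. "1-200"
            lo, hi = map(int, item.split("-"))
            include.append(range(lo, hi + 1))
        else:                                 # single value, e.g. "20"
            include.append(range(int(item), int(item) + 1))
    return event_id not in exclude and any(event_id in r for r in include)

print(match_event_id(100, "1-200,-100"))      # False: excluded explicitly
print(match_event_id(50, "1-200,-100"))       # True: inside 1-200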




Field Name | Description |
computer_name | Name of the node that generates the current event. |
keywords | Keyword associated with the current event, used for event categorization. |
level | Level of the current event. |
channel | Channel name of the current event. |
event_data | Data related to the current event. |
message | Messages associated with the current event. |
opcode | Operation code associated with the current event. |
process.pid | Process ID of the current event. |
type | API used to obtain the current event. |
version | Version number of the current event. |
record_id | Record number associated with the current event. |
event_id | ID of the current event. |
task | Task associated with the current event. |
provider_guid | Global transaction ID of the current event's source. |
activity_id | Global transaction ID of the event's associated activity. All events occurring within this activity will share the same global transaction ID. |
process.thread.id | Thread ID of the current event. |
provider_name | Source of the current event. |
raw_data | Original information of the current event, in XML format. |
Last updated:2025-12-03 11:22:42
log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent"';
Field Name | Description |
remote_addr | Client IP address. |
remote_user | Client name. |
time_local | Local server time. |
request | HTTP request method and URL. |
status | HTTP request status code. |
body_bytes_sent | Number of bytes sent to the client. |
http_referer | Page URL of the access source. |
http_user_agent | Client browser information. |










For Linux systems, the file path must start with /. For Windows systems, the file path must start with a drive letter, such as C:\.
Linux collection path format: /[Directory prefix expression]/**/[File name expression]. Example: /data/log/**/*.log.
Windows collection path format: [Drive letter]:\[Directory prefix expression]\**\[File name expression]. Example: C:\Program Files\Tencent\...\*.log.
Field | Description |
Directory Prefix | Directory structure prefix of the log file. Only the wildcards * and ? are supported. * matches multiple arbitrary characters. ? matches a single arbitrary character. Commas (,) are not supported. |
** | Indicates the current directory and all subdirectories. |
File Name | Log file name. Only the wildcards * and ? are supported. * matches multiple arbitrary characters. ? matches a single arbitrary character. Commas (,) are not supported. |
No. | Directory Prefix Expression | File Name Expression | Description |
1. | /var/log/nginx | access.log | In this example, the log path is configured as /var/log/nginx/**/access.log. LogListener will listen to all log files named access.log in the subdirectories under the /var/log/nginx prefix path. |
2. | /var/log/nginx | *.log | In this example, the log path is configured as /var/log/nginx/**/*.log. LogListener will listen to all log files ending with .log in the subdirectories under the /var/log/nginx prefix path. |
3. | /var/log/nginx | error* | In this example, the log path is configured as /var/log/nginx/**/error*. LogListener will listen to all log files starting with error in the subdirectories under the /var/log/nginx prefix path. |
log/*.log and rename the old file after log rotation as log/*.log.xxxx.

log_format main '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $body_bytes_sent '
                '"$http_referer" "$http_user_agent"';
(\S+)\s*-\s*(\S+)\s*\[(\d+\S+\d+:\d+:\d+:\d+)\s+\S+\]\s*\"(\S+)\s+(\S+)\s+\S+\"\s*(\S+)\s*(\S+)\s*\"([^"]*)\"\s*\"([^"]*)\".*
59.x.x.x - - [06/Aug/2019:12:12:19 +0800] "GET /nginx-logo.png HTTP/1.1" 200 368 "http://119.x.x.x/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36" "-"
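To show what the extraction regex above yields for this sample log, here is an illustrative Python sketch; the key names are assumptions chosen to mirror the log_format variables:

import re

regex = r'(\S+)\s*-\s*(\S+)\s*\[(\d+\S+\d+:\d+:\d+:\d+)\s+\S+\]\s*\"(\S+)\s+(\S+)\s+\S+\"\s*(\S+)\s*(\S+)\s*\"([^"]*)\"\s*\"([^"]*)\".*'
keys = ["remote_addr", "remote_user", "time_local", "method", "url",
        "status", "body_bytes_sent", "http_referer", "http_user_agent"]

sample = ('59.x.x.x - - [06/Aug/2019:12:12:19 +0800] "GET /nginx-logo.png HTTP/1.1" 200 368 '
          '"http://119.x.x.x/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 '
          '(KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36" "-"')

m = re.match(regex, sample)
if m:
    print(dict(zip(keys, m.groups())))
# {'remote_addr': '59.x.x.x', 'remote_user': '-', 'time_local': '06/Aug/2019:12:12:19', 'method': 'GET', ...}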



Name | Description | Configuration Item |
Timeout property | This configuration controls the timeout for log files. If a log file has no updates within the specified time, it is considered timed out. LogListener will stop collecting from that timed-out log file. If you have a large number of log files, it is recommended to reduce the timeout to avoid LogListener performance waste. | No timeout: Log files never time out. Custom: The timeout for log files can be customized. |
Maximum directory levels | /**/ in the collection path represents searching through all subdirectories for files. However, if you do not want to search too deeply into subdirectories, you can use the Maximum Directory Depth configuration item to limit the search depth. | An integer greater than 0. 0 means no drilling down into subdirectories. |

Last updated:2025-12-03 11:22:42
REPLICATION SLAVE, REPLICATION CLIENT, and SELECT permissions has been created.



Parameter | Required | Description |
MySQL Type | Yes | MySQL type to which you want to subscribe. Supported types include: Self-built MySQL |
MySQL Instance | Yes | If the MySQL type is TencentDB for MySQL or TDSQL-C for MySQL, you can select the target TencentDB for MySQL instance from the drop-down options. Note: Currently, you can only select a TencentDB for MySQL instance in the same region as the log topic. To subscribe to binlogs from a MySQL instance in a different region, set MySQL Type to Self-built MySQL and subscribe to binlogs through the public network. |
Access mode | Yes | If the MySQL type is set to Self-built MySQL, you can choose to access your MySQL through the private network address or public network address. |
Network | Yes | If the MySQL type is set to Self-built MySQL and the private network address is used as the access method, you need to specify the VPC network of the MySQL instance. Note: The VPC network should be in the same region as the log topic. |
Network service type | Yes | If the MySQL type is set to Self-built MySQL and the private network address is used as the access method, you need to specify the network service type of the target MySQL: If your MySQL needs to be accessed through Cloud Load Balancer (CLB), select CLB. If your MySQL server allows direct access, select CVM. |
Private/Public Network Access Address | Yes | If the MySQL type is set to Self-Built MySQL, specify the private network access address or public network access address of MySQL based on the selected access method. |
MySQL Port | Yes | If the MySQL type is set to Self-Built MySQL, specify the MySQL port. |
Username | Yes | Specify the username for accessing MySQL. Note: The following permissions need to be enabled for the account: REPLICATION SLAVE, REPLICATION CLIENT, and SELECT. |
Password | Yes | Specify the password for accessing MySQL. |

Parameter | Required | Description |
Subscription Rule Name | Yes | Specify the name of the current subscription rule. |
Start position | Yes | Specify the starting position for binlog subscription. The starting position can be defined using one of the following 3 methods: Latest position: collects binlogs from the latest position. Specified position: starts collecting binlogs from the specified position. Specified GTID: starts collecting binlogs from the specified GTID (transaction ID) position. |
Binlog File Name | Yes | If Starting Position is set to the specified position, you need to specify the binlog file name, such as mysql-bin.000005. |
Starting Binlog Position | Yes | If Starting Position is set to the specified position, you need to specify the position in the binlog file to start collecting data. For example, to collect the starting index of the Nth log entry in mysql-bin.000005, specify the position as the position+size of the (N-1)th log entry (where position is the starting index of the binlog entry in the binlog file, and size is the size of the binlog entry). To collect data from the beginning of the file, specify the position as 0. Note: Ensure that the position exactly matches the starting index of a binlog entry. Otherwise, the subscription may fail. |
Starting GTID | Yes | If Starting Position is set to the specified GTID, you need to specify the starting GTID (transaction ID). |
Event Type | Yes | Type of the binlog event that needs to be collected. The following 4 event types are supported: DDL: data definition language Insert: insertion Update: update Delete: deletion |
Event Metadata | No | The following 6 types of binlog-related metadata are supported for selection to be uploaded along with the logs: log_name: binlog file name position: starting index of the binlog entry in the binlog file size: binlog size server_id: secondary server ID gtid: transaction ID task_id: current subscription task ID Note: Binlogs for DDL events do not support size metadata. |
Timestamp | Yes | The following 2 types of time can be selected as the timestamp of the collected binlog: Collection time: time when the binlog is collected. Event time: time when the event corresponding to the binlog occurs. |
Database Table Allowlist | No | When this option is enabled, the binlogs of the specified database tables are collected. When it is disabled, the binlogs of all databases and tables are collected. Multiple allowlists can be configured, each requiring the following information: Database name: name of the database associated with the binlog to be collected. Database table: name of the database table associated with the binlog to be collected. Multiple names can be specified. Note: DDL event binlog collection is not subject to allowlist restrictions. |
Database Table Blocklist | No | When this option is enabled, the binlogs of specified databases and tables can be ignored and are not collected. Multiple blocklists can be configured, each requiring the following information: Database name: name of the database associated with the binlog to be ignored. Database table: name of the database table associated with the binlog to be ignored. Multiple names can be specified. Note: DDL event binlog collection is not subject to blocklist restrictions. |
Flatten Event Data | No | If this option is enabled, different events will be split into multiple logs, and operated fields will be tiled in each log. For example, if two database operations are performed, one update to fields a and b, and one update to fields d and e, two logs will be generated with values a:xxx, old_a:xxx, b:xxx, old_b:xxx and d:xxx, old_d:xxx, e:xxx, old_e:xxx respectively. If this option is disabled, event data will be centrally packaged into two fields, old_data and data, in array+JSON format. For example, if two database operations are performed, one update to fields a and b, and one update to fields d and e, the log value will be old_data:[{a:xxx,b:xxx}, {d:xxx,e:xxx}], data:[{a:xxx,b:xxx}, {d:xxx,e:xxx}]. |
Preview Binlog | No | Click to obtain the first binlog entry that matches the subscription type from the database and display its content. |
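To make the Flatten Event Data option above more concrete, the illustrative Python structures below contrast the two output shapes for the same pair of UPDATE events (the field names a, b, d, e mirror the example in the table):

# Flatten Event Data enabled: one log per event, with changed fields tiled at the top level.
flattened_logs = [
    {"a": "xxx", "old_a": "xxx", "b": "xxx", "old_b": "xxx"},
    {"d": "xxx", "old_d": "xxx", "e": "xxx", "old_e": "xxx"},
]

# Flatten Event Data disabled: a single log packing the events into `data` / `old_data` arrays.
packed_log = {
    "data": [{"a": "xxx", "b": "xxx"}, {"d": "xxx", "e": "xxx"}],
    "old_data": [{"a": "xxx", "b": "xxx"}, {"d": "xxx", "e": "xxx"}],
}

print(len(flattened_logs), "logs vs 1 log")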

Field | Description |
db | Database name. |
table | Table name. |
query | Query statement. |
type | Event type. |
after | When type is set to insert, update, or delete, this field contains the affected fields and result values, wrapped in a JSON format. |
before | When type is set to update, this field contains the affected fields and their values before the update, wrapped in a JSON format. |
errorCode | When type is set to DDL, this field contains the error code from the execution of the DDL statement. |
executionTime | When type is set to DDL, this field contains the time taken to execute the DDL statement. |
Field | Description |
log_name | Binlog File Name |
position | Starting index of the binlog entry in the binlog file. |
size | Binlog size. |
server_id | Slave ID |
gtid | Transaction ID. |
task_id | Subscription rule task ID. |

Last updated:2025-11-26 10:31:16
Parameter | Description |
Authentication Mechanism | Currently, only SASL_PLAINTEXT is supported. |
hosts | The CLS Kafka address is configured according to the region of the target write log topic. See CLS Kafka address. |
topic | CLS Kafka topic name, configured as the log topic ID. Example: 76c63473-c496-466b-XXXX-XXXXXXXXXXXX. |
username | CLS Kafka access user name, configured as the logset ID. Example: 0f8e4b82-8adb-47b1-XXXX-XXXXXXXXXXXX. |
password | CLS Kafka access password, in the format ${secret_id}#${secret_key}, for example: XXXXXXXXXXXXXX#YYYYYYYY. To obtain key information, visit key acquisition. Ensure that the associated account has the appropriate Kafka protocol log upload permission. To upload anonymously, the format is topic_id#${log topic ID}, for example: topic_id#76c63473-c496-466b-XXXX-XXXX. Note: The target log topic must enable Anonymous upload and select Log upload via Kafka under Anonymous operation. For details, see Log Topic. |
header | Defines the parsing behavior when logs are uploaded via the Kafka protocol. json_remove_escape: whether to remove escapes during JSON parsing; valid values are true and false, defaulting to false if not specified. time_key: the field in the log to use as the log collection time. time_format: when time_key is configured, the time parsing format of the specified field must also be configured. For details, see configure time format. |
Access Method | CLS Kafka Address |
Private Network | ${region}-producer.cls.tencentyun.com:9095 |
Public Network | ${region}-producer.cls.tencentcs.com:9096 |
output.kafka:
  enabled: true
  hosts: ["${region}-producer.cls.tencentyun.com:9095"] # TODO service address; public network port 9096, private network port 9095
  topic: "${ClsTopicID}" # TODO log topic ID
  version: "1.0.0"
  compression: "${compress}" # TODO configure compression mode, support gzip, snappy, lz4, such as "lz4"
  username: "${ClslogsetID}" # TODO logset ID
  # For anonymous upload, password: "topic_id#${log topic ID}"
  password: "${secret_id}#${secret_key}"
output.kafka:
  enabled: true
  hosts: ["${region}-producer.cls.tencentyun.com:9095"] # TODO service address; public network port 9096, private network port 9095
  topic: "${ClsTopicID}" # TODO log topic ID
  version: "0.11.0.2"
  compression: "${compress}" # TODO configure compression mode, support gzip, snappy, lz4, such as "lz4"
  username: "${ClslogsetID}" # TODO logset ID
  # For anonymous upload, password: "topic_id#${log topic ID}"
  password: "${secret_id}#${secret_key}"
kafka: client has run out of available brokers to talk to, it is recommended to upgrade the version to 1.0.0.
output {
  kafka {
    topic_id => "${ClstopicID}"
    bootstrap_servers => "${region}-producer.cls.tencentyun.com:${port}"
    sasl_mechanism => "PLAIN"
    security_protocol => "SASL_PLAINTEXT"
    compression_type => "${compress}"
    # For anonymous upload, password='topic_id#${log topic ID}'
    sasl_jaas_config => "org.apache.kafka.common.security.plain.PlainLoginModule required username='${ClslogsetID}' password='${secret_id}#${secret_key}';"
    codec => json
  }
}
<match *>
  @type rdkafka2
  # brokers setting
  # For TODO domain name, refer to https://cloud.tencent.com/document/product/614/18940. Pay attention to the private network port 9095 and public network port 9096.
  brokers "${domain}:${port}" # e.g. gz-producer.cls.tencentyun.com:9095
  # topic settings
  # TODO replace log topic ID
  topic "${topic_id}"
  # sasl
  rdkafka_options {
    "sasl.mechanism": "PLAIN",
    "security.protocol": "sasl_plaintext",
    # TODO logset ID of the topic
    "sasl.username": "${logset_id}",
    # TODO key of the topic's owner uin, format ${secret_id}#${secret_key}; for anonymous upload, format topic_id#${log topic ID}
    "sasl.password": "${secret_id}#${secret_key}"
  }
  required_acks 1
  compression_codec gzip
  <format>
    @type json
  </format>
  <buffer tag>
    flush_at_shutdown true
    flush_mode interval
    flush_interval 1s
    chunk_limit_size 3MB
    chunk_full_threshold 1
    total_limit_size 1024MB
    overflow_action block
  </buffer>
</match>
[OUTPUT]
    Name kafka
    Match *
    # For TODO domain name, refer to https://cloud.tencent.com/document/product/614/18940. Pay attention to the private network port 9095 and public network port 9096.
    Brokers ${domain}:${port} # e.g. gz-producer.cls.tencentyun.com:9095
    # TODO replace log topic ID
    Topics ${topic_id}
    # The maximum size of TODO request message, not more than 5M.
    rdkafka.message.max.bytes 5242880
    rdkafka.sasl.mechanisms PLAIN
    rdkafka.security.protocol sasl_plaintext
    # TODO Select the value of acks based on the usage scenario
    rdkafka.acks 1
    # TODO configuration compression mode
    rdkafka.compression.codec lz4
    # TODO logset ID of the topic
    rdkafka.sasl.username ${logset_id}
    # TODO key of the topic's owner uin, format ${secret_id}#${secret_key}; for anonymous upload, format topic_id#${log topic ID}
    rdkafka.sasl.password ${secret_id}#${secret_key}
import ("fmt""github.com/Shopify/sarama")func main() {config := sarama.NewConfig()config.Net.SASL.Mechanism = "PLAIN"config.Net.SASL.Version = int16(1)config.Net.SASL.Enable = true// TODO logset IDconfig.Net.SASL.User = "${logsetID}"// TODO format: ${secret_id}#${secret_key}. For anonymous upload, format: topic_id#${log topic ID}config.Net.SASL.Password = "${secret_id}#${secret_key}"config.Producer.Return.Successes = true# TODO Select the value of acks based on the usage scenarioconfig.Producer.RequiredAcks = ${acks}config.Version = sarama.V1_1_0_0// TODO configuration compression modeconfig.Producer.Compression = ${compress}// TODO service address: public network port 9096, private network port 9095producer, err := sarama.NewSyncProducer([]string{"${region}-producer.cls.tencentyun.com:9095"}, config)if err != nil {panic(err)}msg := &sarama.ProducerMessage{Topic: "${topicID}", // TODO log topic IDValue: sarama.StringEncoder("goland sdk sender demo"),}// Send the message.for i := 0; i <= 5; i++ {partition, offset, err := producer.SendMessage(msg)if err != nil {panic(err)}fmt.Printf("send response; partition:%d, offset:%d\n", partition, offset)}_ = producer.Close()}
from kafka import KafkaProducer

if __name__ == '__main__':
    produce = KafkaProducer(
        # TODO service address: public network port 9096, private network port 9095
        bootstrap_servers=["${region}-producer.cls.tencentyun.com:9095"],
        security_protocol='SASL_PLAINTEXT',
        sasl_mechanism='PLAIN',
        # TODO logset ID
        sasl_plain_username='${logsetID}',
        # TODO format: ${secret_id}#${secret_key}. For anonymous upload, format: topic_id#${log topic ID}
        sasl_plain_password='${secret_id}#${secret_key}',
        api_version=(0, 11, 0),
        # TODO configuration compression mode
        compression_type="${compress_type}",
    )
    for i in range(0, 5):
        # sendMessage TODO log topic ID
        future = produce.send(topic="${topicID}", value=b'python sdk sender demo')
        result = future.get(timeout=10)
        print(result)
<dependencies>
    <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.11.0.2</version>
    </dependency>
</dependencies>
import org.apache.kafka.clients.producer.*;

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ProducerDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException, TimeoutException {
        // 0. Configure a series of parameters.
        Properties props = new Properties();
        // TODO when using
        props.put("bootstrap.servers", "${region}-producer.cls.tencentyun.com:9095");
        // TODO The following values are set according to the business scene.
        props.put("acks", ${acks});
        props.put("retries", ${retries});
        props.put("batch.size", ${batch.size});
        props.put("linger.ms", ${linger.ms});
        props.put("buffer.memory", ${buffer.memory});
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "${compress_type}"); // TODO configuration compression mode
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        // TODO username logsetID; password is combination of secret_id and secret_key, format ${secret_id}#${secret_key},
        // For anonymous upload, password is topic_id#${log topic ID}
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username='${logsetID}' password='${secret_id}#${secret_key}';");
        // 1. Create a producer object
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        // 2. Call the send method TODO log topic ID
        Future<RecordMetadata> meta = producer.send(new ProducerRecord<String, String>("${topicID}", ${message}));
        RecordMetadata recordMetadata = meta.get(${timeout}, TimeUnit.MILLISECONDS);
        System.out.println("offset = " + recordMetadata.offset());
        // 3. Close the producer.
        producer.close();
    }
}
// https://github.com/edenhill/librdkafka - master
#include <iostream>
#include <librdkafka/rdkafka.h>
#include <string>
#include <unistd.h>

#define BOOTSTRAP_SERVER "${region}-producer.cls.tencentyun.com:${port}"
// USERNAME is the logset ID.
#define USERNAME "${logsetID}"
// PASSWORD format: ${secret_id}#${secret_key}. For anonymous upload, format: topic_id#${log topic ID}
#define PASSWORD "${secret_id}#${secret_key}"
// log topic ID
#define TOPIC "${topicID}"
#define ACKS "${acks}"
// Configuration Compression Mode
#define COMPRESS_TYPE "${compress_type}"

static void dr_msg_cb(rd_kafka_t *rk, const rd_kafka_message_t *rkmessage, void *opaque) {
    if (rkmessage->err) {
        fprintf(stdout, "%% Message delivery failed : %s\n", rd_kafka_err2str(rkmessage->err));
    } else {
        fprintf(stdout, "%% Message delivery successful %zu:%d\n", rkmessage->len, rkmessage->partition);
    }
}

int main(int argc, char **argv) {
    // 1. Initialize the configuration.
    rd_kafka_conf_t *conf = rd_kafka_conf_new();
    rd_kafka_conf_set_dr_msg_cb(conf, dr_msg_cb);
    char errstr[512];
    if (rd_kafka_conf_set(conf, "bootstrap.servers", BOOTSTRAP_SERVER, errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        rd_kafka_conf_destroy(conf);
        fprintf(stdout, "%s\n", errstr);
        return -1;
    }
    if (rd_kafka_conf_set(conf, "acks", ACKS, errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        rd_kafka_conf_destroy(conf);
        fprintf(stdout, "%s\n", errstr);
        return -1;
    }
    if (rd_kafka_conf_set(conf, "compression.codec", COMPRESS_TYPE, errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        rd_kafka_conf_destroy(conf);
        fprintf(stdout, "%s\n", errstr);
        return -1;
    }
    // Set the authentication method.
    if (rd_kafka_conf_set(conf, "security.protocol", "sasl_plaintext", errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        rd_kafka_conf_destroy(conf);
        fprintf(stdout, "%s\n", errstr);
        return -1;
    }
    if (rd_kafka_conf_set(conf, "sasl.mechanisms", "PLAIN", errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        rd_kafka_conf_destroy(conf);
        fprintf(stdout, "%s\n", errstr);
        return -1;
    }
    if (rd_kafka_conf_set(conf, "sasl.username", USERNAME, errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        rd_kafka_conf_destroy(conf);
        fprintf(stdout, "%s\n", errstr);
        return -1;
    }
    if (rd_kafka_conf_set(conf, "sasl.password", PASSWORD, errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
        rd_kafka_conf_destroy(conf);
        fprintf(stdout, "%s\n", errstr);
        return -1;
    }
    // 2. Create handler.
    rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));
    if (!rk) {
        rd_kafka_conf_destroy(conf);
        fprintf(stdout, "create produce handler failed: %s\n", errstr);
        return -1;
    }
    // 3. Send data.
    std::string value = "test lib kafka ---- ";
    for (int i = 0; i < 100; ++i) {
    retry:
        rd_kafka_resp_err_t err = rd_kafka_producev(rk, RD_KAFKA_V_TOPIC(TOPIC),
                                                    RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
                                                    RD_KAFKA_V_VALUE((void *) value.c_str(), value.size()),
                                                    RD_KAFKA_V_OPAQUE(nullptr), RD_KAFKA_V_END);
        if (err) {
            fprintf(stdout, "Failed to produce to topic : %s, error : %s", TOPIC, rd_kafka_err2str(err));
            if (err == RD_KAFKA_RESP_ERR__QUEUE_FULL) {
                rd_kafka_poll(rk, 1000);
                goto retry;
            }
        } else {
            fprintf(stdout, "send message to topic successful : %s\n", TOPIC);
        }
        rd_kafka_poll(rk, 0);
    }
    std::cout << "message flush final" << std::endl;
    rd_kafka_flush(rk, 10 * 1000);
    if (rd_kafka_outq_len(rk) > 0) {
        fprintf(stdout, "%d message were not deliverer\n", rd_kafka_outq_len(rk));
    }
    rd_kafka_destroy(rk);
    return 0;
}
/*
 * The demo only provides the simplest method of use, specific production can be achieved by combine calls
 * During use, the todo items in the Demo need to be replaced
 *
 * Notes:
 * 1. The Demo is verified based on Confluent.Kafka/1.8.2
 * 2. MessageMaxBytes must not exceed 5M
 * 3. The Demo uses synchronous production and can be changed to asynchronous based on business scenario when using
 * 4. Other parameters can be adjusted themselves according to the business reference document during use: https://docs.confluent.io/platform/current/clients/confluent-kafka-dotnet/_site/api/Confluent.Kafka.ProducerConfig.html
 *
 * Confluent.Kafka reference document: https://docs.confluent.io/platform/current/clients/confluent-kafka-dotnet/_site/api/Confluent.Kafka.html
 */
using System;
using Confluent.Kafka;

namespace Producer
{
    class Producer
    {
        private static void Main(string[] args)
        {
            var config = new ProducerConfig
            {
                // TODO domain name, refer to https://cloud.tencent.com/document/product/614/18940.
                // Fill in Kafka. Pay attention to the private network port 9095 and public network port 9096.
                BootstrapServers = "${domain}:${port}",
                SaslMechanism = SaslMechanism.Plain,
                // TODO logset ID of the topic
                SaslUsername = "${logsetID}",
                // TODO key for the uin the topic belongs to, format: ${secret_id}#${secret_key}
                // For anonymous upload, format is topic_id#${log topic ID}
                SaslPassword = "${secret_id}#${secret_key}",
                SecurityProtocol = SecurityProtocol.SaslPlaintext,
                // TODO assign according to the actual use scene. Available values: Acks.None, Acks.Leader, and Acks.All
                Acks = Acks.None,
                // The maximum size of TODO request message, not more than 5M.
                MessageMaxBytes = 5242880
            };

            // deliveryHandler
            Action<DeliveryReport<Null, string>> handler =
                r => Console.WriteLine(!r.Error.IsError ? $"Delivered message to {r.TopicPartitionOffset}" : $"Delivery Error: {r.Error.Reason}");

            using (var produce = new ProducerBuilder<Null, string>(config).Build())
            {
                try
                {
                    // TODO Test Verification Code
                    for (var i = 0; i < 100; i++)
                    {
                        // TODO replace log topic ID
                        produce.Produce("${topicID}", new Message<Null, string> { Value = "C# demo value" }, handler);
                    }
                    produce.Flush(TimeSpan.FromSeconds(10));
                }
                catch (ProduceException<Null, string> pe)
                {
                    Console.WriteLine($"send message receiver error : {pe.Error.Reason}");
                }
            }
        }
    }
}
Last updated:2025-11-17 09:35:25


Parameter | Required | Description |
CKafka instance | Yes | Select the target CKafka instance. |
Kafka topics | Yes | Select one or more Kafka topics. |
Consumer group | No | When left empty, a consumer group will be automatically created using the naming convention cls-${taskid}. If specified, the designated consumer group will be used for consumption. Note: 1. If left empty, ensure the Kafka cluster has permissions to auto-create consumer groups. 2. If specified, verify the designated consumer group is not actively used by other tasks to prevent data loss. |
Start position | Yes | Earliest: Start consuming from the earliest offset. Latest: Start consuming from the latest offset. Note: The starting position can only be configured when the subscription task is created and the position cannot be modified afterward. |
Parameter | Required | Description |
Access mode | Yes | You can choose to access your self-built Kafka cluster via Private network or public network access. |
Network service type | Yes | If the access method is via Private network, you need to specify the network service type of the target self-built Kafka cluster. CVM CLB Cloud Connect Network (CCN) (currently in beta, submit a ticket if you need to use it). Direct connect gateway (currently in beta, submit a ticket if you need to use it). Note: For the differences and usage of different network service types, see Self-built Kafka Private Network Access Configuration Instructions. |
Network(VPC) | Yes | When the network service type is selected as CVM or CLB, you need to select the VPC instance where the CVM or CLB is located. |
Service Address | Yes | Enter the public IP address or domain name of the target Kafka. Note: If the Kafka protocol is used to consume logs from other log topics across regions/accounts, use the target log topic's Cross-Account Log Sync via Kafka Data Subscription. |
Private Domain Resolution | No | When Kafka brokers deployed on CVM communicate using internal domain names, you need to specify the CVM domain name and IP address for each broker here. For detailed configuration scenarios, see Configuration Instructions for Self-built Kafka Private Network Access. |
Authentication | Yes | Whether authentication is required to access the target Kafka cluster. |
Protocol | Yes | If the target Kafka cluster requires authentication to access, you need to select the authentication protocol type: plaintext sasl_plaintext sasl_ssl ssl |
Authentication mechanism | Yes | If the target Kafka cluster requires authentication to access, and the protocol type is sasl_plaintext or sasl_ssl, you need to select the authentication mechanism: PLAIN SCRAM-SHA-256 SCRAM-SHA-512 |
Username/Password | Yes | If the target Kafka cluster requires authentication to access, and the protocol type is sasl_plaintext or sasl_ssl, you need to enter the username and password required to access the target Kafka cluster. |
Client SSL Authentication | Yes | If the access protocol type for the target Kafka cluster is sasl_ssl or ssl, and client CA certificates are required for access, you need to enable this configuration and choose an existing certificate or go to SSL Certificate Service to upload the CA certificate. |
Server SSL Authentication | Yes | If the access protocol type for the target Kafka cluster is sasl_ssl or ssl, and server certificates are required for access, you need to enable this configuration and choose an existing certificate or go to SSL Certificate Service to upload the server certificate. |
Kafka topics | Yes | Enter one or more Kafka topics. Separate multiple topics with commas. |
Consumer group | No | If it is left empty, a consumer group will be automatically created with the naming convention cls-${taskid}. If it is specified, the designated consumer group will be used for consumption. Notes: If it is left empty, ensure that the Kafka cluster can automatically create a consumer group. If it is specified, ensure that the designated consumer group is not being used by other tasks, as this may cause data loss. |
Start position | Yes | Earliest: Start consuming from the earliest offset. Latest: Start consuming from the latest offset. Note: The starting position can only be configured when the subscription task is created and the position cannot be modified afterward. |
Parameter | Required | Description |
Configuration Name | Yes | The name of the Kafka data subscription configuration. |
Data extraction mode | Yes | You can choose from three extraction modes: JSON, Single-line full-text log, and Single-line full regular expression. For more details, see Data Extraction Mode. |
Log Sample | Yes | If the data extraction mode is set to single-line full regular expression, you need to manually enter or automatically obtain a log sample to validate the regular expression and extract key-value pairs. |
Regular Expression | Yes | If the data extraction mode is set to single-line full regular expression, you need to manually enter or automatically generate a regular expression. The system will validate and extract key-value pairs based on the regular expression you provide. For detailed instructions on how to automatically generate a regular expression, see Automatically Generating Regular Expressions. |
Log Extraction Result | Yes | If the data extraction mode is set to single-line full regular expression, you need to configure or modify the field names extracted based on the regular expression. |
Manual Verification | No | If the data extraction mode is set to single-line full regular expression, you can optionally provide one or more log samples to validate the correctness of the regular expression. |
Upload Parsing-Failed Logs | Yes | If the data extraction mode is set to JSON or single-line full regular expression, and if uploading parsing-failed logs is enabled, LogListener will upload the logs where parsing fails. If it is disabled, the failed logs will be discarded. |
Key Name of Parsing-Failed Logs | Yes | If uploading parsing-failed logs is enabled, you can specify a field name as the Key, and the logs that fail to be parsed will be uploaded as the Value of the specified field. |
Encoding format | Yes | Based on your logs, you can choose from the following two encoding formats: UTF-8 GBK |
Use default time | Yes | When it is enabled, the system will use the current system time or the Kafka message timestamp as the log timestamp. When it is disabled, the timestamp from the log's time field will be used. |
Default Time Source | Yes | When Use default time is enabled, you can choose one of the following two time sources as the log timestamp: Current system time Kafka message timestamp |
Time field | Yes | When Use default time is disabled, and the data extraction mode is JSON or regex, you can specify the field name in the log that represents the time. The value of this field will be used as the log's timestamp. |
Time extraction regex | Yes | When Use default time is disabled, and the data extraction mode is single-line full-text, you can define the field that represents the time in the log using a regular expression. Note: If the regular expression matches multiple fields, the first one will be used. Example: If the original log is message with time 2022-08-08 14:20:20, you can set the time extraction regex as \d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d |
Time field format | Yes | When Use default time is disabled and the time field in the log is confirmed, you need to further specify the time format to parse the value of the time field. For more details, see Configure Time Format. |
Time zone of the time field | Yes | When Use default time is disabled and the time field and format in the log are confirmed, you need to choose between the following two time zone standards: UTC (Coordinated Universal Time) GMT (Greenwich Mean Time) |
Time used when the parsing failed | Yes | When Use default time is disabled, if the time extraction regex or time field format parsing fails, users can choose between the following two default times as the log timestamp: Current system time Kafka message timestamp |
Filter | No | Filters add log collection rules based on business needs, helping you keep only valuable log data. The following filtering rules are supported: Equal to: Only collect logs whose specified field values match the specified characters. Exact or regular matching is supported. Not equal to: Only collect logs whose specified field values do not match the specified characters. Exact or regular matching is supported. Field exists: Only collect logs in which the specified field exists. Field does not exist: Only collect logs in which the specified field does not exist. For example, to collect only JSON logs whose response_code is 400 or 500, enter response_code as the key, select Equal to as the filtering rule, and enter 400|500 as the value. Note: Multiple filter conditions are combined with AND logic. If multiple filter conditions are configured for the same key name, the later rules overwrite the earlier ones. |
Kafka metadata | No | The following 4 types of Kafka-related metadata are supported for selection to be uploaded along with the logs: kafka_topic kafka_partition kafka_offset kafka_timestamp Note: If there are fields in the original log with the same name as the above metadata, they will be overwritten. |
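To make the filter semantics above concrete, the following sketch is purely illustrative (it is not the CLS implementation): it shows how Equal to / Not equal to rules with regular-expression matching and AND logic could be evaluated against a parsed log. The function and rule format are hypothetical.
import re

# Illustrative only: all rules must match (AND logic); "equal" supports exact or regex matching.
def keep_log(log: dict, rules: list) -> bool:
    for key, op, pattern in rules:
        value = log.get(key)
        if op == "exists":
            if value is None:
                return False
        elif op == "not_exists":
            if value is not None:
                return False
        elif op == "equal":
            if value is None or not re.fullmatch(pattern, str(value)):
                return False
        elif op == "not_equal":
            if value is not None and re.fullmatch(pattern, str(value)):
                return False
    return True

log = {"response_code": "500", "url": "/event/dispatch"}
rules = [("response_code", "equal", "400|500")]  # keep only logs with response_code 400 or 500
print(keep_log(log, rules))  # True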

Configuration Item | Feature Description |
Full-Text Delimiter | A set of characters that split the field value into segments. Only English symbols are supported. The default separator on the console is @&? |#()='",;:<>[]{}/ \n\t\r\\. |
Case sensitive | Whether it is case-sensitive during retrieval. For example, if the log is Error and case-sensitive, it cannot be retrieved with error. |
Allow Chinese Characters | Enable this feature when the log includes Chinese and needs to be retrieved. For example, if the log is "User log-in API timeout", without enabling this feature, the log cannot be retrieved by searching "Timeout". The log can only be retrieved by completely searching "User log-in API timeout". After this feature is enabled, the log can be retrieved by searching "Timeout". |
level:error AND timeCost:>1000. Some logs also contain a special type of metadata field, and the index configuration for these fields is the same as for regular fields.
Configuration Item | Feature Description |
Field Name | The field name. A single log topic key-value index can have up to 300 fields. Only letters, digits, underscores, and -./@ are supported, and the field name cannot start with an underscore. |
Field Type | The data types of the field include text, long and double. The text type supports fuzzy retrieval using wildcards and does not support range comparison. The long and double types support range comparison, but do not support fuzzy retrieval. |
Delimiter | Character set for word segmentation of field values. Only English symbols are supported. The default word separator on the console is @&? |#()='",;:<>[]{}/ \n\t\r\\. |
Chinese Characters | This feature can be enabled when the field includes Chinese and needs to be retrieved. For example, if the log is message: User log-in API timeout and this feature is disabled, searching message: "Timeout" cannot retrieve the log; only message: "User log-in API timeout" can. After this feature is enabled, the log can be retrieved with message: "Timeout". |
Statistics | If this parameter is enabled, you can use SQL to analyze this field. When the text type field is enabled for statistics, if the value is too long, only the first 32766 characters are involved in statistical calculations. Enabling statistics will not incur additional fees. It is recommended that you enable it. |
Case Sensitivity | Specifies whether the retrieval is case-sensitive. For example, if the log is level:Error and case sensitivity is enabled, retrieving with level:error will not work. |
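The delimiter and case-sensitivity settings above determine which segments a search term can match. The following sketch is purely illustrative (it is not the CLS tokenizer, which runs server-side): it shows how a delimiter set splits a raw log into searchable segments, which is why a term such as errorMessage is not matched by a search for error unless a wildcard (error*) is used.
import re

# Illustrative segmentation only; the delimiter set mirrors the console default shown above.
DELIMITERS = "@&?|#()='\",;:<>[]{}/ \n\t\r\\"

def segment(raw: str, case_sensitive: bool = False) -> list:
    pattern = "[" + re.escape(DELIMITERS) + "]+"
    tokens = [t for t in re.split(pattern, raw) if t]
    return tokens if case_sensitive else [t.lower() for t in tokens]

log = "level:Error errorMessage:user log-in API timeout"
print(segment(log))
# ['level', 'error', 'errormessage', 'user', 'log-in', 'api', 'timeout']
# A search for "error" matches the segment 'error' but not 'errormessage';
# matching the latter requires the wildcard search error*.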





{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}
agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
body_sent: 23
http_host: 127.0.0.1
method: POST
referer: http://127.0.0.1/my/course/4
remote_ip: 10.135.46.111
request: POST /event/dispatch HTTP/1.1
response_code: 200
responsetime: 0.232
time_local: 22/Jan/2019:19:19:34 +0800
upstreamhost: unix:/tmp/php-cgi.sock
upstreamtime: 0.232
url: /event/dispatch
xff: -
Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
__CONTENT__:Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354
(\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
body_bytes_sent: 9703
http_host: 127.0.0.1
http_protocol: HTTP/1.1
http_referer: http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum
http_user_agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
remote_addr: 10.135.46.111
request_length: 782
request_method: GET
request_time: 0.354
request_url: /my/course/1
status: 200
time_local: [22/Jan/2019:19:19:30 +0800]
upstream_response_time: 0.354
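As a quick way to check the regular expression above against the sample log before saving the configuration, the following sketch applies it with Python's re module. The field names are taken from the extraction result above; the snippet itself is for illustration only and is not part of the product.
import re

# Sample Nginx access log and the regular expression shown above.
log = ('10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" '
       '127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all'
       '&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" '
       '"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354')
pattern = (r'(\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)'
           r'\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*')
fields = ["remote_addr", "time_local", "request_method", "request_url", "http_protocol",
          "http_host", "status", "request_length", "body_bytes_sent", "http_referer",
          "http_user_agent", "request_time", "upstream_response_time"]

match = re.match(pattern, log)
if match:
    for name, value in zip(fields, match.groups()):
        print(f"{name}: {value}")
else:
    print("regex does not match the sample log")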
listener.security.protocol.map=CVM:PLAINTEXT
listeners=CVM://10.0.0.2:9092
advertised.listeners=CVM://10.0.0.2:9092

listener.security.protocol.map=CLB:PLAINTEXT
listeners=CLB://10.0.0.2:29092
advertised.listeners=CLB://10.0.0.12:29092
listener.security.protocol.map=DOMAIN:PLAINTEXT
listeners=DOMAIN://10.0.0.2:9092
advertised.listeners=DOMAIN://broker1.cls.tencent.com:9092
Last updated:2024-01-20 17:14:28
curl --request GET 'http://{host}/track?topic_id={topic_id}&key1=val1&key2=val2'
Parameter | Required | Description |
${host} | Yes | |
${topic_id} | Yes | Log topic ID |
key1=val1&key2=val2 | Yes | The key-value pairs you want to upload to CLS. Ensure that the data is less than 16 KB. |
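The GET upload request above can also be issued from Python with the requests library. The sketch below is for reference only; ${host} and ${topic_id} are placeholders from the table above, and the key-value payload must stay under 16 KB.
import requests

host = "${host}"            # replace with your CLS access domain
params = {
    "topic_id": "${topic_id}",  # replace with your log topic ID
    "key1": "val1",
    "key2": "val2",
}
resp = requests.get(f"http://{host}/track", params=params, timeout=5)
print(resp.status_code)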
track.gif file contains the custom parameters that you want to upload. If you use this method, CLS records the custom parameters as well as the User-Agent HTTP header as log fields.<img src='http://${host}/track.gif?topic_id={topic_id}&key1=val1&key2=val2'/>
POST http://${host}/tracklog?topic_id=${topic_id} HTTP/1.1
Parameter | Required | Note |
${host} | Yes | |
${topic_id} | Yes | Log topic ID |
POST /tracklog?topic_id={topic_id} HTTP/1.1
Host: ap-guangzhou.cls.tencentcs.com
Content-Type: application/json

{
    "logs": [
        {
            "contents": {
                "key1": "value1",
                "key2": "value2"
            },
            "time": 123456789
        }
    ],
    "source": "127.0.0.1"
}
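The same POST request can be sent from Python; the sketch below mirrors the sample request above and is provided for reference only (the domain, topic ID, timestamp, and field values are placeholders to replace).
import requests

host = "ap-guangzhou.cls.tencentcs.com"  # replace with your CLS access domain
topic_id = "${topic_id}"                 # replace with your log topic ID
body = {
    "logs": [
        {
            "contents": {"key1": "value1", "key2": "value2"},
            "time": 123456789,           # UNIX timestamp of the log
        }
    ],
    "source": "127.0.0.1",
}
resp = requests.post(f"http://{host}/tracklog", params={"topic_id": topic_id},
                     json=body, timeout=5)
print(resp.status_code, resp.text)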
Last updated:2024-01-20 17:14:28
<dependency>
    <groupId>com.tencentcloudapi.cls</groupId>
    <artifactId>tencentcloud-cls-logback-appender</artifactId>
    <version>1.0.3</version>
</dependency>
<appender name="LoghubAppender" class="com.tencentcloudapi.cls.LoghubAppender"><!--Required--><!--Domain Configuration -- Refer to https://intl.cloud.tencent.com/document/product/614/18940?lang=en&pg=#domain-name for detailed information.><endpoint><region>.cls.tencentcs.com</endpoint><accessKeyId>${SecretID}</accessKeyId><accessKeySecret>${SecretKey}</accessKeySecret><!--Log Topic ID--><topicId>${topicId}</topicId><!-- Optional. For details, see 'Parameter description'--><totalSizeInBytes>104857600</totalSizeInBytes><maxBlockMs>0</maxBlockMs><sendThreadCount>8</sendThreadCount><batchSizeThresholdInBytes>524288</batchSizeThresholdInBytes><batchCountThreshold>4096</batchCountThreshold><lingerMs>2000</lingerMs><retries>10</retries><baseRetryBackoffMs>100</baseRetryBackoffMs><maxRetryBackoffMs>50000</maxRetryBackoffMs><!-- Optional. Set the time format --><timeFormat>yyyy-MM-dd'T'HH:mm:ssZ</timeFormat><timeZone>Asia/Shanghai</timeZone><encoder><pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg</pattern></encoder><mdcFields>THREAD_ID,MDC_KEY</mdcFields></appender>
Parameter | Description | Example |
totalSizeInBytes | Maximum size of cached logs in a single producer instance. The default value is 100 MB. | totalSizeInBytes=104857600 |
maxBlockMs | If the available space for the producer is insufficient, the maximum blockage time in the send method defaults to 60 seconds. To prevent the obstruction of the log printing thread, it is strongly recommended to set this value to 0. | maxBlockMs=0 |
sendThreadCount | Size of the thread pool for executing log transmission tasks. The default value is the number of available processors. | sendThreadCount=8 |
batchSizeThresholdInBytes | When the size of the cached logs in a ProducerBatch is greater than or equal to the value of the batchSizeThresholdInBytes, the batch will be dispatched. The default value is 512 KB. The maximum value is 5 MB. | batchSizeThresholdInBytes=524288 |
batchCountThreshold | When the number of cached logs in a ProducerBatch is greater than or equal to the value of the batchCountThreshold, the batch will be dispatched. The default value is 4096.The maximum value is 40960. | batchCountThreshold=4096 |
lingerMs | Linger time of a ProducerBatch from creation to dispatch. The default value is 2 seconds. The minimum value is 100 milliseconds. | lingerMs=2000 |
retries | If the initial transmission of a ProducerBatch fails, it is retried up to this many times. The default value is 10. If the value is less than or equal to 0, the ProducerBatch directly enters the failure queue after its initial unsuccessful transmission. | retries=10 |
maxReservedAttempts | You will trace back more information when the value of this parameter becomes larger. However, this will also consume more memory. | maxReservedAttempts=11 |
baseRetryBackoffMs | Initial backoff time for the first retry. The default value is 100 milliseconds. The Producer uses an exponential backoff algorithm, where the scheduled waiting time for the Nth retries is calculated as baseRetryBackoffMs * 2^(N-1). | baseRetryBackoffMs=100 |
maxRetryBackoffMs | Maximum backoff time for retries. The default value is 50 seconds. | maxRetryBackoffMs=50000 |
timeFormat | This parameter is used to set the time format. | Accurate to the second: yyyy-MM-dd'T'HH:mm:ssZ; accurate to the millisecond: yyyy-MM-dd'T'HH:mm:ss.SSSZ |
Last updated:2024-01-20 17:14:28
Errors that occur inside the appender are printed via org.apache.log4j.helpers.LogLog and, by default, will be output to the console.
Field | Description |
__SOURCE__ | Source IP |
__FILENAME__ | File name |
level | Log level |
location | Code location of the log print statement |
message | Log content |
throwable | Log exception information (This field exists only when exception information is logged.) |
thread | Thread name |
time | Log print time (You can print the format and time zone via timeFormat and timeZone respectively.) |
log | Custom log format |
<dependency><groupId>com.tencentcloudapi.cls</groupId><artifactId>tencentcloud-cls-log4j-appender</artifactId><version>1.0.2</version></dependency>
#loghubAppender
log4j.appender.loghubAppender=com.tencentcloudapi.cls.LoghubAppender
# CLS HTTP address. Required.
log4j.appender.loghubAppender.endpoint=ap-guangzhou.cls.tencentcs.com
# User ID. Required.
log4j.appender.loghubAppender.accessKeyId=
log4j.appender.loghubAppender.accessKeySecret=
# `log` field format. Required.
log4j.appender.loghubAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.loghubAppender.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
# Log topic. Required.
log4j.appender.loghubAppender.topicID=
# Log source. Optional.
log4j.appender.loghubAppender.source=
# Maximum size of logs cached by a single Producer instance. The default value is 100 MB.
log4j.appender.loghubAppender.totalSizeInBytes=104857600
# Maximum time for blocking a caller from using the `send` method if the Producer has insufficient free space. The default value is 60 seconds. It is strongly recommended that this value be set to 0 in order not to block the log print thread.
log4j.appender.loghubAppender.maxBlockMs=0
# Size of the thread pool for executing log sending tasks. The default value is the number of available processors.
log4j.appender.loghubAppender.sendThreadCount=8
# When the size of logs cached in ProducerBatch is greater than or equal to `batchSizeThresholdInBytes`, the batch will be sent. The default value is 512 KB, and the maximum value can be set to 5 MB.
log4j.appender.loghubAppender.batchSizeThresholdInBytes=524288
# When the number of logs cached in ProducerBatch is greater than or equal to `batchCountThreshold`, the batch will be sent. The default value is 4096, and the maximum value allowed is 40960.
log4j.appender.loghubAppender.batchCountThreshold=4096
# Linger time of a ProducerBatch from creation to sending. The default value is 2 seconds, and the minimum value allowed is 100 milliseconds.
log4j.appender.loghubAppender.lingerMs=2000
# Number of times that a ProducerBatch can be retried if it fails to be sent for the first time. The default value is 10 retries.
# If `retries` is less than or equal to 0, the ProducerBatch directly enters the failure queue when it fails to be sent for the first time.
log4j.appender.loghubAppender.retries=10
# A larger parameter value allows you to trace more information, but it also consumes more memory.
log4j.appender.loghubAppender.maxReservedAttempts=11
# Backoff time for the first retry. The default value is 100 milliseconds.
# The Producer adopts an exponential backoff algorithm. The scheduled wait time for the Nth retry is baseRetryBackoffMs * 2^(N-1).
log4j.appender.loghubAppender.baseRetryBackoffMs=100
# Maximum backoff time for retries. The default value is 50 seconds.
log4j.appender.loghubAppender.maxRetryBackoffMs=50000
# Time format. Optional.
log4j.appender.loghubAppender.timeFormat=yyyy-MM-dd'T'HH:mm:ssZ
# Set the time zone to the UTC+08:00 time zone. Optional.
log4j.appender.loghubAppender.timeZone=Asia/Shanghai
# Output DEBUG and higher level messages
log4j.appender.loghubAppender.Threshold=DEBUG
Last updated:2025-11-07 17:41:10
SDK Language | Code Repository | Log Upload Practice |
Python | ||
Java | ||
C++ | ||
C | ||
Go | ||
NodeJS | ||
HarmonyOS NEXT | ||
Android | ||
iOS | ||
PHP | ||
Flutter (Rust) | ||
Flutter (Dart) | - | |
Browser JavaScript | ||
Mini Program JavaScript | ||
.NET sdk | - |
Last updated:2025-11-19 19:25:38
POST /structuredlog?topic_id=xxxxxxxx-xxxx-xxxx-xxxx HTTP/1.1
Host: <Region>.cls.tencentyun.com
Authorization: <AuthorizationString>
Content-Type: application/x-protobuf

<`LogGroupList` content packaged as a PB file>
Private network domain name: ${region}.cls.tencentyun.com, which is only valid for access requests from the same region, that is, CVM or other Tencent Cloud services access the CLS service in the same region through the private domain name.
Public network domain name: ${region}.cls.tencentcs.com. As long as the access source is connected to the internet, the public domain name of CLS can normally be accessed.
The region field is the abbreviation of a CLS service region, such as ap-beijing for the Beijing region. For the complete region list, see Available Regions.
ap-beijing - Beijing
ap-shanghai - Shanghai
ap-guangzhou - Guangzhou
ap-chengdu - Chengdu
...
hashkey, strictly guaranteeing the sequence of the data written to and consumed in this partition.
POST /structuredlog?topic_id=xxxxxxxx-xxxx-xxxx-xxxx HTTP/1.1
Host: <Region>.cls.tencentyun.com
Authorization: <AuthorizationString>
Content-Type: application/x-protobuf
x-cls-hashkey: xxxxxxxxxxxxxxxxxxxxxxxx

<`LogGroupList` content packaged as a PB file>
POST /structuredlog?topic_id=xxxxxxxx-xxxx-xxxx-xxxx HTTP/1.1
Host: <Region>.cls.tencentyun.com
Authorization: <AuthorizationString>
Content-Type: application/x-protobuf
x-cls-compress-type: lz4

<`LogGroupList` content packaged as a PB file>
POST /structuredlog?topic_id=xxxxxxxx-xxxx-xxxx-xxxx HTTP/1.1
Host: <Region>.cls.tencentyun.com
Authorization: <AuthorizationString>
Content-Type: application/x-protobuf

<`LogGroupList` content packaged as a PB file>
POST /structuredlog
x-cls-hashkey request header indicates that logs are written to the CLS topic partitions with a range corresponding to the hashkey route, strictly guaranteeing the write sequence of logs to each topic partition for sequential consumption.Field Name | Type | Location | Required | Description |
x-cls-hashkey | string | header | No | Specifies the topic partition to which the logs will be written based on hashkey |
Field Name | Type | Location | Required | Description |
topic_id | string | query | Yes | ID of the target log topic to which data will be uploaded, which can be viewed on the log topic page |
logGroupList | message | pb | Yes | The logGroup list, which describes the encapsulated log groups. No more than five logGroup values are recommended. |
Field Name | Required | Description |
logs | Yes | Log array, which is a set consisting of multiple Log values. A Log indicates a log, and LogGroup can contain up to 10,000 Log values |
contextFlow | No | UID used to maintain context, which does not take effect currently |
filename | No | Log filename |
source | No | Log source, which is generally the server IP |
logTags | No | Tag list of the log |
Field Name | Required | Description |
time | Yes | UNIX timestamp of log time in seconds or milliseconds (recommended) |
contents | No | Log content in key-value format. A log can contain multiple key-value pairs. |
Field Name | Required | Description |
key | Yes | Key of a field group in one log, which cannot start with _. |
value | Yes | Value of a field group, which cannot exceed 1 MB in one log. The total value cannot exceed 5 MB in LogGroup. |
LogTag description:Field Name | Required | Description |
key | Yes | Key of a custom tag |
value | Yes | Value corresponding to the custom tag key |
HTTP/1.1 200 OK
Content-Length: 0
protobuf-2.6.1.tar.gz package to /usr/local and access this directory:
[root@VM_0_8_centos]# tar -zxvf protobuf-2.6.1.tar.gz -C /usr/local/ && cd /usr/local/protobuf-2.6.1
[root@VM_0_8_centos protobuf-2.6.1]# ./configure
[root@VM_0_8_centos protobuf-2.6.1]# make && make install
[root@VM_0_8_centos protobuf-2.6.1]# export PATH=$PATH:/usr/local/protobuf-2.6.1/bin
[root@VM_0_8_centos protobuf-2.6.1]# protoc --version
libprotoc 2.6.1
Create the cls.proto file based on the PB data format content specified by CLS. The content of cls.proto (PB description file) is as follows:
package cls;

message Log
{
    message Content
    {
        required string key   = 1; // Key of each field group
        required string value = 2; // Value of each field group
    }
    required int64   time     = 1; // Unix timestamp
    repeated Content contents = 2; // Multiple `key-value` pairs in one log
}

message LogTag
{
    required string key   = 1;
    required string value = 2;
}

message LogGroup
{
    repeated Log    logs        = 1; // Log array consisting of multiple logs
    optional string contextFlow = 2; // This parameter does not take effect currently
    optional string filename    = 3; // Log filename
    optional string source      = 4; // Log source, which is generally the server IP
    repeated LogTag logTags     = 5;
}

message LogGroupList
{
    repeated LogGroup logGroupList = 1; // Log group list
}
cls.proto file. Run the following compilation command:
protoc --cpp_out=./ ./cls.proto
--cpp_out=./ indicates that the file will be compiled in cpp format and output to the current directory. ./cls.proto indicates the cls.proto description file in the current directory. After successful compilation, the corresponding cls.pb.h header file and cls.pb.cc code implementation file are generated:
[root@VM_0_8_centos protobuf-2.6.1]# protoc --cpp_out=./ ./cls.proto
[root@VM_0_8_centos protobuf-2.6.1]# ls
cls.pb.cc cls.pb.h cls.proto
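The generated bindings are not limited to C++. As a hedged illustration (not part of the official samples), the sketch below assumes you also generated Python bindings with protoc --python_out=./ ./cls.proto (producing cls_pb2.py); it builds a LogGroupList and serializes it into the binary body expected by the upload API. Signing of the Authorization header is omitted here.
import time

import cls_pb2  # generated by: protoc --python_out=./ ./cls.proto

# Build a LogGroupList with one LogGroup containing one log entry.
log_group_list = cls_pb2.LogGroupList()
log_group = log_group_list.logGroupList.add()
log_group.source = "127.0.0.1"

log = log_group.logs.add()
log.time = int(time.time())
for key, value in {"level": "info", "message": "hello CLS"}.items():
    content = log.contents.add()
    content.key = key
    content.value = value

pb_body = log_group_list.SerializeToString()
# pb_body is the `LogGroupList` content packaged as a PB file; send it as the body of
# POST /structuredlog with Content-Type: application/x-protobuf and a valid Authorization header.
print(len(pb_body), "bytes")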
cls.pb.h header file into the code and call the API for data format encapsulation.
Last updated:2025-12-03 11:22:42

Field | Description | Example |
access | Network used in this probe. | Wi-Fi |
access_subtype | Another network used in this probe. This parameter is available when multiple networks are connected simultaneously. | ● Wi-Fi ● Android: 3G/4G/5G ● iOS: Cellular |
app_version | Application version number. | 1.0.0 |
device_model | Device model. | - |
me | Mobile user identifier. | - |
resolution | Screen resolution. | 2476*1440 |
local_time | Local time. | 2023-02-01 20:58:00:332 |
root | Whether it is a root user. | false |
app_id | Application package name. | - |
brand | Device vendor information. | google |
os | Operating system. | Android |
utdid | Device identifier. | - |
os_version | Operating system version. | 13 |
reserve6 | Specific probed content. | - |
reserves | Probing protocol. | ● ping ● tcpping ● Traceroute |
app_name | Application name. | test |
imei | Mobile device identifier. | - |
local_timestamp | Local timestamp. | 1675256280332 |
Field | Description | Example |
method | Probing method. | ping |
host_ip | IP address resolved from the domain name. | 192.0.65.112 |
host | Domain name. | www.tencentcloud.com |
max | Maximum latency. Unit: ms. | 100.11 |
min | Minimum latency. Unit: ms. | 0.00 |
avg | Average latency. Unit: ms. | 74.51 |
stddev | Standard deviation of the latency. | 20.00 |
loss | Number of lost PING packets. | 1 |
count | Number of probes. One PING packet is sent each time. | 10 |
size | Number of bytes in the PING packet. | 64 |
responseNum | Number of PING packets returned. | 9 |
interval | PING packet interval. Unit: ms. | 200 |
timestamp | Local timestamp. | 1675256419 |
Field | Description | Example |
method | Probing method. | TCPPING |
host_ip | IP address resolved from the domain name. | 192.0.65.112 |
host | Domain name. | www.tencentcloud.com |
max | Maximum latency. Unit: ms. | 100.11 |
min | Minimum latency. Unit: ms. | 0.00 |
avg | Average latency. Unit: ms. | 74.51 |
stddev | Standard deviation of the latency. | 20.00 |
loss | Number of lost PING packets. | 1 |
count | Number of probes. One PING packet is sent each time. | 10 |
size | Number of bytes in the PING packet. | 64 |
sum | Total probing time. Unit: ms. | 219.66 |
port | TCP port. | 88 |
timestamp | Local timestamp. | 1675256419 |
Field | Description | Example |
method | Probing method. | TRACEROUTE |
host_ip | IP address resolved from the domain name. | 192.0.65.112 |
host | Domain name. | www.tencentcloud.com |
command_status | Probing request status. | success |
timestamp | Local timestamp. | 1675256419 |
traceroute_node_results | Results returned from the TRACEROUTE probing node. | See the detailed field description below for the list content. |
Field | Description | Example |
targetIp | IP address of a certain hop. | 43.152.65.112 |
hop | Number of a certain hop. The hop number starts at 0 for the source. The hop number increases as the packet moves closer to the destination. | 1 |
avg_delay | Average latency. | 102 |
loss | Number of lost probe packets. | 33 |
is_final_route | Whether it is the final path. | true |
single_node_list | Returned result of a specific node. | See the following detailed fields for the list. |
Field | Description | Example |
targetIp | IP address of a certain hop. | 43.152.65.112 |
hop | Number of a certain hop. The hop number starts at 0 for the source. The hop number increases as the packet moves closer to the destination. | 1 |
delay | Probing latency. | 102 |
is_final_route | Whether it is the final path. | true |
status | Current probing request status. | CMD_STATUS_FAILED/CMD_STATUS_SUCCESSFUL |
Last updated:2025-12-03 11:22:42
Run the following command to build the CLS extension for FluentBit:
go build -buildmode=c-shared -o fluent-bit-go.so
Run the following command to start FluentBit with the CLS extension:
fluent-bit -c example/fluent.conf -e fluent-bit-go.so
example/fluent.conf of FluentBit.
[OUTPUT]
    Name fluent-bit-go-cls
    Match *
    # TODO: Configure the following parameters:
    TopicID YOUR_TOPIC_ID
    CLSEndPoint YOUR_ENDPOINT
    AccessKeyID YOUR_PROJECT_SK
    AccessKeySecret YOUR_PROJECT_AK
Parameter Name | Description |
TopicID | ID of the log topic to which the log will be uploaded. |
CLSEndPoint | Example: Private network domain name of Guangzhou: ap-guangzhou.cls.tencentyun.com Public network domain name of Guangzhou: ap-guangzhou.cls.tencentcs.com |
AccessKeyID | |
AccessKeySecret | Part of the cloud API key. SecretKey is the key used to encrypt the signature string and verify the signature string on the server side. |
Last updated:2025-12-03 11:22:42
logstash-plugin install logstash-output-cls
logstash -f logstash-sample.conf
Configuration example (logstash-sample.conf):
input {
}
## If you need to specify a time field in the log as the timestamp, you can configure the following time parsing settings:
filter {
    date {
        match => ["produce_log_time","yyyy-MM-dd HH:mm:ss.SSS"]
        target => "@timestamp"
    }
}
output {
    cls {
        endpoint => "[CLS data access domain name]"
        topic_id => "[Log topic ID]"
        access_key_id => ""
        access_key_secret => ""
    }
}
Parameter Name | Type | Required | Default Value | Description |
endpoint | string | Yes | - | Example: Private network domain name of Guangzhou: ap-guangzhou.cls.tencentyun.com Public network domain name of Guangzhou: ap-guangzhou.cls.tencentcs.com |
topic_id | string | Yes | - | ID of the log topic to which the log will be uploaded. |
source | string | No | IP address of the local NIC | IP address of the source from which the log originates. |
access_key_id | string | Yes | - | |
access_key_secret | string | Yes | - | Part of the cloud API key. SecretKey is the key used to encrypt the signature string and verify the signature string on the server side. |
max_buffer_items | int | No | 4000 | Logs are uploaded by batch. This parameter controls the maximum number of log entries that can be included in one batch. |
max_buffer_bytes | int | No | 2097152 | Logs are uploaded by batch. This parameter controls the maximum total size of each batch in bytes. |
max_buffer_seconds | int | No | 3 | Logs are uploaded by batch. This parameter controls the maximum dwell time from creation to sending for each batch. |
total_size_in_bytes | int | No | 104857600 | Maximum size of logs that the instance can cache, in bytes. |
max_send_retry | int | No | 10 | Maximum number of retries when log upload fails. |
send_retry_interval | int | No | 200 | Time interval between retries when log upload fails. |
to_json | boolean | No | true | Whether to perform JSON parsing on collected logs. |
time_key | string | No | @timestamp | Key name of the source field for log timestamp. |
Last updated:2024-01-20 17:14:28


Configuration Item | Description | Rule | Required |
Task Name | Set the name of the import task. | The value can contain letters, numbers, underscores (_), and hyphens (-). | Yes |
Bucket Region | Set the region of the bucket where the file to be imported resides. If the file to be imported and the destination log topic are in different regions, public network fees will be incurred due to cross-region access. | Select an option from the list. | Yes |
Bucket | Select the bucket where the file to be imported resides. The drop-down list box provides all buckets in the selected region for you to choose. | Select an option from the list. | Yes |
File Prefix | Enter the prefix of the folder where the COS file to be imported resides for accurate locating. You can enter the file prefix csv/ or the complete file path csv/object.gz. | Enter a value. | Yes |
Compression Mode | Select the compression mode of the COS file to be imported. CLS decompresses the file and reads data according to the compression mode of the file. Supported compression modes are: GZIP, LZOP, SNAPPY, and no compression. | Select an option from the list. | Yes |
key as status and the filter rule as 400|500.:::, it can also be parsed through custom delimiter.@&()='",;:<>[]{}/ \n\t\r and can be modified as needed.

Last updated:2024-01-20 17:14:28
Service | Collection Configuration Directions | Log Analysis |
CLB | ||
CDN | ||
ECDN | ||
EdgeOne | - | |
CVM | Install and configure LogListener. For more information, see Deploying LogListener on CVMs in Batches. | |
TKE | Configure log collection in the TKE console. For more information, see Collect container logs to CLS. | |
SCF | Configure log collection in the SCF console. For more information, see Log Delivery Configuration (Legacy). | - |
CloudAudit | Configure log collection in the CloudAudit console. For more information, see Shipping Log with Tracking Set. | - |
COS | Configure log collection in the COS console. For more information, see Enabling Real-Time Log Feature on COS. | |
Flow Logs | ||
TI-ONE | Configure log collection in the TI-ONE console. | - |
WAF | Configure log collection in the WAF console. | - |
CKafka | Configure log collection in the CKafka console. | - |
IoT Hub | - |
Last updated:2025-12-03 11:22:42

/usr/local/ as an example. In the /usr/local/loglistener/tools path, run the LogListener initialization command with root permissions. The initialization command is as follows:./loglistener.sh init -secretid AKID******************************** -secretkey whHwQfjdLnzzCE1jIf09xxxxxxxxxxxx -domain asccelerate-xxxxx-ap-xxxxx -IP xxx.xxx.xxx.xxx
Parameter Name | Required | Type Description |
secretid | Yes | |
secretkey | Yes | Part of the cloud API key. SecretKey is the key used to encrypt the signature string and verify the signature string on the server side. |
domain | Yes | Specify the domain name for log upload acceleration. |
IP | No | IP address of the machine. If this parameter is not specified, LogListener will automatically obtain the IP address of the local machine. |
label | No | Machine label. If this parameter is specified, the machine will be associated with a group of machines that share the same label. Separate multiple labels with commas. If a machine label is configured, the machine will only be associated with a machine group through the label, not through its IP address. If this parameter is not configured, the machine group can only be associated with the machine through its IP address. |
/usr/local/ as an example. In the /usr/local/loglistener/etc path, run the following command to open the LogListener configuration file loglistener.conf.vim loglistener.conf

systemctl restart loglistenerd
Last updated:2025-12-03 11:22:43




Parameter | Required | Description |
Public Network Access Address | Yes | The service address of the Elasticsearch cluster. Specify an IP address or a domain name. |
ES Port | Yes | The access port of the Elasticsearch cluster. Generally, the port is 9200. |
Username | No | Elasticsearch username. This setting is required only if user authentication is enabled for the Elasticsearch cluster. |
Password | No | Elasticsearch user password. This setting is required only if user authentication is enabled for the Elasticsearch cluster. |
Parameter | Required | Description |
Network service type | Yes | If the access method is via the private network address, you need to specify the network service type of the target Elasticsearch cluster. CVM CLB |
Network(VPC) | Yes | When the network service type is selected as CVM or CLB, you need to select the VPC instance where the CVM or CLB instance is located. |
Private Network Access Address | Yes | The service address of the Elasticsearch cluster. Specify an IP address or a domain name. |
ES Port | Yes | The access port of the Elasticsearch cluster. Generally, the port is 9200. |
Username | No | Elasticsearch username. This setting is required only if user authentication is enabled for the Elasticsearch cluster. |
Password | No | Elasticsearch user password. This setting is required only if user authentication is enabled for the Elasticsearch cluster. |


Parameter | Description |
Import Rule Name | The name of the imported configuration. |
Index List | The indexes to be imported. Separate multiple indexes with commas (,), such as index1,index2,index3. A maximum of 200 indexes are supported. |
ES Query Statement | The query statement used to filter data. Only data that meets the query conditions will be imported to CLS. Specify * or leave it blank to import all data without filtering. The query statement must comply with the Elasticsearch query_string format. For more details, see Query string query. |
Import Mode | Supports importing historical data or new data: Import Historical Data: The task will be completed after data import is finished. Import New Data: The import task will run continuously. If you select Import New Data, you must specify a time field. |
Log Time Source | Supports selecting Log Collection Time and Specify Log Fields. Log Collection Time: The time when logs are imported to CLS is used as the log timestamp. Specify Log Fields: Specify the field name representing time in the log. The value of this field will be used as the log timestamp. Note: When the collection time is used as the time field, sorting by _id needs to be enabled for the Elasticsearch cluster. |
Log Time Field | This field is required only when Log Time Source is Specify Log Fields. Specify the field name representing time in the log. The value of this field will be used as the log timestamp. Note: The specified time field needs to be of the keyword type. If the time field type is text, nested, object, or binary, sorting will not be supported, thus resulting in data import failure. |
Time Format for Parsing | After confirming the time field in the log, you need to further specify the time format to parse the value of the time field. For details, see Configuring the Time Format. |
Time zone of the time field | After confirming the time field and format in the log, you need to select one of the following two time zone standards: UTC (Coordinated Universal Time) GMT (Greenwich Mean Time) |
Import Time Range | Specify the time range of logs to import. This configuration is only valid if a time field is set. |
Start Time | This option is available only when the import mode is set to Import New Data. Specify the start time for data import. |
Maximum Data Latency | Specify the maximum latency from data generation to writing to Elasticsearch. The default value is 600s, and the maximum value is 3600s. This configuration is valid only when the import mode is set to Import New Data. If the set value is smaller than the actual latency, some data cannot be imported from Elasticsearch to CLS. |
Check Cycle | Check cycle for new data in Elasticsearch. The default value is 300s, and the minimum value is 60s. |




Field | Description |
__TAG__.es_url | The URL address of the Elasticsearch cluster from which logs are generated. |
__TAG__.es_index | The index information of the log source. |
Limit | Description |
Size of a single log | The maximum size of a single log that can be imported is 1 MB. The part exceeding this limit will be discarded. |
Number of import tasks | A single topic supports a maximum of 100 Elasticsearch import tasks. |
Number of imported indexes | A single task supports importing a maximum of 200 Elasticsearch indexes. |
Last updated:2024-09-20 17:48:27
Network Type | address |
Public Network | https://${region}.cls.tencentcs.com/prometheus/${topicId}/api/v1/write |
Private Network | https://${region}.cls.tencentyun.com/prometheus/${topicId}/api/v1/write |
${region} with the region where the metric topic is located, such as ap-beijing. For more region abbreviations, see Available Regions. Currently, only the regions of Beijing, Shanghai, Guangzhou, and Nanjing are supported.
${topicId} with the metric topic ID, such as 0e69453c-0727-4c9c-xxxx-ea51b10d2aba. You can find the topic ID in the Metric Topic List.
${SecretId} and ${SecretKey}: credentials of an account that has been granted the following permission policy:
{
    "version": "2.0",
    "statement": [
        {
            "effect": "allow",
            "action": [
                "cls:MetricsRemoteWrite"
            ],
            "resource": [
                "*"
            ]
        }
    ]
}
[[outputs.http]]
  ## Reporting address: Replace ${region} and ${topicId}. This example uses the public network address; if network conditions allow, it is recommended to use the private network address.
  ## Private network address URL = https://${region}.cls.tencentyun.com/prometheus/${topicId}/api/v1/write
  url = "https://${region}.cls.tencentcs.com/prometheus/${topicId}/api/v1/write"
  ## Authentication information: Replace ${SecretId} and ${SecretKey}.
  username = "${SecretId}"
  password = "${SecretKey}"
  ## Do not modify the Telegraf output data format configuration.
  data_format = "prometheusremotewrite"
  [outputs.http.headers]
    Content-Type = "application/x-protobuf"
    Content-Encoding = "snappy"
    X-Prometheus-Remote-Write-Version = "0.1.0"
./vmagent-prod \
  -remoteWrite.url=https://${region}.cls.tencentcs.com/prometheus/${topicId}/api/v1/write \
  -remoteWrite.basicAuth.username=${SecretId} \
  -remoteWrite.basicAuth.password=${SecretKey}
https://${region}.cls.tencentyun.com/prometheus/${topicId}/api/v1/write.
# Reporting address: Replace ${region} and ${topicId}. This example uses the public network address; if network conditions allow, it is recommended to use the private network address.
# Private network address URL: https://${region}.cls.tencentyun.com/prometheus/${topicId}/api/v1/write
url: https://${region}.cls.tencentcs.com/prometheus/${topicId}/api/v1/write
# Authentication information: Replace ${SecretId} and ${SecretKey}.
basic_auth:
  username: ${SecretId}
  password: ${SecretKey}
# Data write policy: Including caching and retry mechanisms, the following configuration is recommended for handling large data volumes.
queue_config:
  capacity: 20480
  min_shards: 100
  max_samples_per_send: 2048
  batch_send_deadline: 20s
  min_backoff: 100ms
  max_backoff: 5s
kubectl create secret generic kubepromsecret \
  --from-literal=username=${SecretId} \
  --from-literal=password=${SecretKey} \
  -n monitoring
${SecretId} and ${SecretKey} in the command.
-n monitoring to the correct namespace.
kube-prometheus/manifests/prometheus-prometheus.yaml.
remoteWrite:
  - url: "https://${region}.cls.tencentcs.com/prometheus/${topicId}/api/v1/write"
    basicAuth:
      username:
        name: kubepromsecret
        key: username
      password:
        name: kubepromsecret
        key: password
${region} and ${topicId} in the configuration. This example uses the public network address; if network conditions allow, it is recommended to use the private network address.
Private network address URL: https://${region}.cls.tencentyun.com/prometheus/${topicId}/api/v1/write
queueConfig:
  capacity: 204800
  minShards: 100
  maxShards: 2048
  maxSamplesPerSend: 4096
  batchSendDeadline: 30s
  minBackoff: 100ms
  maxBackoff: 5s
kubectl apply -f prometheus-prometheus.yaml -n monitoring
prometheus-prometheus.yaml with the correct configuration file path.
-n monitoring to the correct namespace.
Last updated:2024-01-20 16:46:19
Feature | STANDARD_IA | STANDARD |
Index creation | ✓ (supports only full-text indexes) | ✓ |
Context search | ✓ | ✓ |
Quick analysis | × | ✓ |
Full-text search | ✓ (responds in 2 seconds for searches in 100 million records) | ✓ (responds in 0.5 second for searches in 100 million records) |
Key-value search | × | ✓ |
Log download | ✓ | ✓ |
SQL analysis | × | ✓ |
Dashboard | × | ✓ |
Monitoring alarm | × | ✓ |
Shipping to COS | ✓ | ✓ |
Shipping to CKafka | ✓ | ✓ |
Shipping to ES | ✓ | ✓ |
Shipping to SCF | ✓ | ✓ |
Log consumption | ✓ | ✓ |
Data processing | ✓ | ✓ |
Last updated:2023-02-16 17:31:08
Last updated:2024-09-20 17:48:27
requests_total{method="POST", handler="/messages"} 217
# HELP nginx_http_requests_total The total number of HTTP requests
# TYPE nginx_http_requests_total counter
nginx_http_requests_total 10234

# HELP nginx_http_requests_duration_seconds The HTTP request duration in seconds
# TYPE nginx_http_requests_duration_seconds histogram
nginx_http_requests_duration_seconds_bucket{le="0.005"} 2405
nginx_http_requests_duration_seconds_bucket{le="0.01"} 5643
nginx_http_requests_duration_seconds_bucket{le="0.025"} 7890
nginx_http_requests_duration_seconds_bucket{le="0.05"} 9234
nginx_http_requests_duration_seconds_bucket{le="0.1"} 10021
nginx_http_requests_duration_seconds_bucket{le="0.25"} 10234
nginx_http_requests_duration_seconds_bucket{le="0.5"} 10234
nginx_http_requests_duration_seconds_bucket{le="1"} 10234
nginx_http_requests_duration_seconds_bucket{le="2.5"} 10234
nginx_http_requests_duration_seconds_bucket{le="5"} 10234
nginx_http_requests_duration_seconds_bucket{le="10"} 10234
nginx_http_requests_duration_seconds_bucket{le="+Inf"} 10234
nginx_http_requests_duration_seconds_sum 243.56
nginx_http_requests_duration_seconds_count 10234

# HELP nginx_http_connections Number of HTTP connections
# TYPE nginx_http_connections gauge
nginx_http_connections{state="active"} 23
nginx_http_connections{state="reading"} 5
nginx_http_connections{state="writing"} 7
nginx_http_connections{state="waiting"} 11

# HELP nginx_http_response_count_total The total number of HTTP responses sent
# TYPE nginx_http_response_count_total counter
nginx_http_response_count_total{status="1xx"} 123
nginx_http_response_count_total{status="2xx"} 9123
nginx_http_response_count_total{status="3xx"} 456
nginx_http_response_count_total{status="4xx"} 567
nginx_http_response_count_total{status="5xx"} 65

# HELP nginx_up Is the Nginx server up
# TYPE nginx_up gauge
nginx_up 1
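For applications that do not already expose metrics in this text format, they can be produced with a standard Prometheus client library and then shipped to the metric topic via any remote-write-capable agent. The following Python sketch is illustrative only (the metric names simply mirror the sample above) and uses the prometheus_client package, which is not part of CLS.
# Hedged sketch: expose metrics in the Prometheus text format shown above
# using prometheus_client (pip install prometheus-client). An agent such as
# Telegraf, vmagent, or Prometheus can then remote-write them to a CLS metric topic.
import random
import time

from prometheus_client import Counter, Gauge, Histogram, start_http_server

REQUESTS = Counter("nginx_http_requests_total", "The total number of HTTP requests")
CONNECTIONS = Gauge("nginx_http_connections", "Number of HTTP connections", ["state"])
DURATION = Histogram(
    "nginx_http_requests_duration_seconds",
    "The HTTP request duration in seconds",
    buckets=[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10],
)

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        REQUESTS.inc()
        CONNECTIONS.labels(state="active").set(random.randint(0, 50))
        DURATION.observe(random.random() / 10)
        time.sleep(1)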
Restriction Item | Description |
Metric name | Supports English letters, numbers, underscores, and colons. It should conform to the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*. |
Label name | Supports English letters, numbers, and underscores. It should conform to the regular expression [a-zA-Z_][a-zA-Z0-9_]*. |
Label value | No special restrictions, supporting all types of Unicode characters. |
Sample value | A float64 type value |
Sample timestamp | Millisecond precision |
Query Concurrency | A single metric topic supports up to 15 concurrent queries. |
Query data volume | A single query can involve up to 200,000 time series, with a maximum of 11,000 data points per time series in the query results. |
Metric upload frequency control | 25,000 QPS |
Metric upload flow control | 250 MB/s |
Last updated:2024-09-20 17:48:27

${SecretId}${SecretKey}{"version": "2.0","statement": [{"effect": "allow","action": ["cls:MetricsSeries","cls:MetricsQueryExemplars","cls:MetricsLabelValues","cls:MetricsQueryRange","cls:MetricsLabels","cls:MetricsQuery"],"resource": ["*"]}]}
# Reading address: Replace ${region} and ${topicId}. This example uses the public network address; if network conditions allow, it is recommended to use the private network address.
# Private network address URL: https://${region}.cls.tencentyun.com/prometheus/${topicId}/api/v1/read
url: https://${region}.cls.tencentcs.com/prometheus/${topicId}/api/v1/read
# Authentication information: Replace ${SecretId} and ${SecretKey}.
basic_auth:
  username: ${SecretId}
  password: ${SecretKey}
${SecretId}${SecretKey}{"version": "2.0","statement": [{"effect": "allow","action": ["cls:MetricsRemoteRead"],"resource": ["*"]}]}
Last updated:2024-09-20 17:48:27
prometheus_http_requests_total records the number of requests made to various Prometheus APIs with different response status codes.
# HELP prometheus_http_requests_total Counter of HTTP requests.
# TYPE prometheus_http_requests_total counter
prometheus_http_requests_total{code="200",handler="/api/v1/label/:name/values"} 7
prometheus_http_requests_total{code="200",handler="/api/v1/query"} 19
prometheus_http_requests_total{code="200",handler="/api/v1/query_range"} 27
prometheus_http_requests_total{code="200",handler="/graph"} 11
prometheus_http_requests_total{code="200",handler="/metrics"} 8929
prometheus_http_requests_total{code="200",handler="/static/*filepath"} 52
prometheus_http_requests_total{code="302",handler="/"} 1
prometheus_http_requests_total{code="400",handler="/api/v1/query_range"} 6
groups:
  - name: example
    rules:
      - record: code:prometheus_http_requests_total:sum
        expr: sum by (code) (prometheus_http_requests_total)
sum by (code) (prometheus_http_requests_total) is the metric query statement (PromQL), which calculates the sum of request counts grouped by status code.
code:prometheus_http_requests_total:sum is the generated metric name, which can be customized. You can use this name in subsequent queries to directly retrieve the precomputed metric.
Configuration Item | Description |
Service Log | Saves the task running logs in the cls_service_log log topic, making it easier to monitor the task's operation status. This log topic is free, and it is recommended to enable it. |
Query Statement | The PromQL statement to be executed. The pre-aggregation task will run this statement at scheduled intervals to retrieve the execution results. |
Indicator Name | The result of the execution statement will be stored under this metric name, which can be used for subsequent data queries. It supports English letters, numbers, underscores, and colons, and should conform to the regular expression [a-zA-Z_:][a-zA-Z0-9_:]*. |
Custom Dimension | Adds dimensions to the metric. If there is a conflict between the custom dimension and the dimension names in the execution statement results, the custom dimension takes precedence. |
Scheduling cycle | The execution interval for the pre-aggregation task, with the range from 1 to 1440 minutes. It is recommended to use a 1-minute interval. |
Advanced Settings | Target Metric Topic: Specifies where the pre-aggregated metric data will be stored. By default, it is stored in the current topic. If you want to store this data separately (e.g., to set a different retention period for this portion of data), you can store it in another metric topic. Delayed Execution: Since there may be delays in metric data collection, you can set a delayed execution to ensure the data is fully collected before the pre-aggregation task runs. The default delay is 30 seconds. |
Configuration Item | Description |
Enabling Status | Indicates whether the task needs to run. Tasks that are not running will not generate pre-aggregated result data. |
Service Log | Saves the task running logs in the cls_service_log log topic, making it easier to monitor the task's operation status. This log topic is free, and it is recommended to enable it. |
Execution Interval | The execution interval for the pre-aggregation task, with the range from 1 to 1440 minutes. It is recommended to use a 1-minute interval. |
YAML Configuration | Compatible with Prometheus Recording Rule YAML. The interval only supports a range from 1 to 1440 minutes. For an example, see the recording rule YAML sample above. |
Advanced Settings | Target Metric Topic: Specifies where the pre-aggregated metric data will be stored. By default, it is stored in the current topic. If you want to store this data separately (e.g., to set a different retention period), you can store it in another metric topic. Delayed Execution: Since there may be delays in metric data collection, you can set a delayed execution to ensure the data is fully collected before the pre-aggregation task runs. The default delay is 30 seconds. |
Last updated:2024-12-20 16:11:32
error, counting the number of logs by URL grouping, calculating PV change trend, etc. It is the most commonly used function in the Cloud Log Service.error to search for all logs with errors.level:error AND timeCost:>1000 to search for logs whose level is error and that consume time (timeCost) greater than 1,000 ms.status:404 | select count(*) as logCounts to get the number of logs whose response status code is 404.Last updated:2025-11-19 20:07:19
|. To search for logs only without statistical analysis, omit the vertical bar | and SQL statement.
[Search condition] | [SQL statement]
status:404 to search for application request logs with response status code 404. If the search condition is empty or *, it indicates there is no search condition, and all logs are searched for.
status:404 | select count(*) as logCounts to count the number of logs with response status code 404.
errorMessage cannot be matched with error, as they are different segments. In this case, you need to add a wildcard and search for it with error*. For more information on segments and examples, see Segment and Index.
Syntax | Description |
key:value | Key-value search, which indicates to query logs with a key field whose value contains the value, such as level:ERROR. |
value | Full-text search, which indicates to query logs with the full text containing the value, such as ERROR. |
AND | Logical AND operator, which is case-insensitive, such as level:ERROR AND pid:1234. |
OR | Logical OR operator, which is case-insensitive, such as level:ERROR OR level:WARNING, which is equivalent to level:(ERROR OR WARNING). |
NOT | Logical NOT operator, which is case-insensitive, such as level:ERROR NOT pid:1234, which is equivalent to level:ERROR AND NOT pid:1234. |
() | Parentheses, which control the precedence of logical operations, such as level:(ERROR OR WARNING) AND pid:1234.Note: When parentheses are not used, AND has a higher priority than OR. |
" " | Phrase search, which encloses a string in double quotation marks to match logs that contain all the words in the string in the same sequence, such as name:"john Smith".A phrase search has no logical operators, and the phrase used is equivalent to the query character, such as name:"and". |
' ' | Phrase search, which encloses a string in single quotation marks and is equivalent to "". When the phrase to be searched for contains double quotation marks, single quotation marks can be used to enclose the phrase to avoid syntax errors, such as body:'user_name:"bob"'. |
* | Fuzzy search, which is used to match zero, one, or multiple characters, such as host:www.test*.com. Fuzzy prefix search is not supported. |
> | Range operator, which indicates the left operand is greater than the right operand, such as status>400 or status:>400. |
>= | Range operator, which indicates the left operand is greater than or equal to the right operand, such as status>=400 or status:>=400. |
< | Range operator, which indicates the left operand is less than the right operand, such as status<400 or status:<400 . |
<= | Range operator, which indicates the left operand is less than or equal to the right operand, such as status<=400 or status:<=400. |
= | Range operator, which indicates the left operand is equal to the right operand, such as status=400 (equivalent to status:400). |
\ | Escape symbol; an escaped character represents the symbol itself. When the retrieved value contains spaces, :, (, ), >, =, <, ", ', or *, it needs to be escaped, for example, body:user_name\:bob. When using double quotation marks for a phrase search, only " and * need to be escaped. When using single quotation marks for a phrase search, only ' and * need to be escaped. An unescaped * represents a fuzzy search. |
key:* | Field of the text type: queries logs containing the field (key), no matter whether the value is empty, such as url:*. Field of the long/double type: queries logs containing the field (key) whose value is not empty, such as response_time:*. |
key:"" | Field of the text type: queries logs containing the field (key) whose value is empty (the value is also empty if it contains only delimiters), such as url:"". Field of the long/double type: queries logs not containing the field (key) or containing the field whose value is empty (equivalent to NOT key:*). |
Sample | Statement |
Logs from a specified server | __SOURCE__:127.0.0.1 or __SOURCE__:192.168.0.* |
Logs from a specified file | __FILENAME__:"/var/log/access.log" |
Logs containing ERROR | ERROR |
Logs of failures (with a status code greater than 400) | status>400 |
Logs of failed GET requests (with a status code greater than 400) | method:GET AND status>400 |
Logs at ERROR or WARNING level | level:(ERROR OR WARNING) |
Logs except those at INFO level | NOT level:INFO |
name:"john Smith" and filepath:"/var/log/access.log", Compared with searches without quotation marks, a phrase search means that the matched logs should contain all the words in the string and in the same sequence as required in the search condition./:#1 filepath:"/var/log/access.log"#2 filepath:"/log/var/access.log"
filepath:/var/log/access.log for search, the above two logs will be matched, as it does not involve the sequence of words.filepath:"/var/log/access.log" for search, only the first log will be matched.filepath:"/var/log/acc*.log" but not in the beginning of words such as filepath:"/var/log/*cess.log".* to match zero, one, or multiple characters, for example:IP:192.168.1.* can be used to match 192.168.1.1 and 192.168.1.34.host:www.te*t.com can be used to match www.test.com and www.telt.com.* cannot be used at the beginning of a word; that is, fuzzy prefix search is not supported.long or double type support a value range but not the asterisk * for a fuzzy search, such as status>400 and status<500.host:www.test.com, host:m.test.com, and you need to query logs containing test in the middle, you can add the prefix . to search for logs with host:test.* | select * where strpos(host,'test')>0, but this approach has poorer performance compared with retrieval conditions and is not suitable for scenarios with large log data volumes.filepath:"/var/log/acc*.log" but not in the beginning of words such as filepath:"/var/log/*cess.log". In addition, wildcards in phrase searches can only match the first 128 words meeting the search condition and return all logs containing these 128 words. The more specific the words, the more accurate the results. This restriction is not applicable to non-phrase searches.Syntax | Description |
AND | Logical AND operator, such as level:ERROR AND pid:1234. |
OR | Logical OR operator, such as level:ERROR OR level:WARNING. |
NOT | Logical NOT operator, such as level:ERROR NOT pid:1234. |
() | Grouping operator, which controls the precedence of logical operations, such as (ERROR OR WARNING) AND pid:1234. |
: | Colon, which is used for key-value search, such as level:ERROR. |
"" | Double quotation marks, which quote a phrase to match logs that contain all the words in the phrase and in the same sequence, such as name:"john Smith". |
* | Wildcard, which is used to replace zero, one, or more characters, such as host:www.test*.com. Prefix fuzzy queries are not supported.You can also use key:* to query logs where the specified field (key) exists. key:* is equivalent to _exists_:key. |
? | Wildcard, which can match one single character, such as host:www.te?t.com. Similar to *, it does not support prefix fuzzy queries. |
> | Range operator, which indicates the left operand is greater than the right operand, such as status:>400. |
>= | Range operator, which indicates the left operand is greater than or equal to the right operand, such as status:>=400. |
< | Range operator, which indicates the left operand is less than the right operand, such as status:<400. |
<= | Range operator, which indicates the left operand is less than or equal to the right operand, such as status:<=400. |
TO | Logical TO operator, such as request_time:[0.1 TO 1.0]. |
[] | Range operator, which includes the upper and lower boundary values, such as age:[20 TO 30]. |
{} | Range operator, which excludes the upper and lower boundary values, such as age:{20 TO 30}. |
\ | Escape character. An escaped character represents the literal meaning of the character, such as url:\/images\/favicon.ico. You can also use "" to wrap special characters as a whole, e.g., url:"/images/favicon.ico". Note that the characters in the double quotation marks are considered as a phrase to match logs that contain all the words in the phrase and in the same sequence. |
_exists_ | \_exists\_:key returns logs that contain key. For example, _exists_:userAgent returns logs that contain the userAgent field. |
In Lucene syntax, the uppercase AND and OR represent logical search operators, while the lowercase and and or are regarded as common text. Keywords separated by spaces default to the OR logic; for example, warning error is equivalent to warning OR error. Use () to group search conditions and clarify precedence when combining the AND and OR operators, such as (ERROR OR WARNING) AND pid:1234.
Sample | Statement |
Logs from a specified server | __SOURCE__:127.0.0.1 or __SOURCE__:192.168.0.* |
Logs from a specified file | __FILENAME__:"/var/log/access.log" or __FILENAME__:\/var\/log\/*.log |
Logs containing ERROR | ERROR |
Logs of failures (with a status code greater than 400) | status:>400 |
Logs of failed GET requests (with a status code greater than 400) | method:GET AND status:>400 |
Logs at ERROR or WARNING level | level:ERROR OR level:WARNING |
Logs except those at INFO level | NOT level:INFO |
Logs from 192.168.10.10 but except those at INFO level | __SOURCE__:192.168.10.10 NOT level:INFO |
Logs from the /var/log/access.log file on 192.168.10.10 but except those at INFO level | (__SOURCE__:192.168.10.10 AND __FILENAME__:"/var/log/access.log.*") NOT level:INFO |
Logs from 192.168.10.10 and at ERROR or WARNING level | __SOURCE__:192.168.10.10 AND (level:ERROR OR level:WARNING) |
Logs with a status code of 4XX | status:[400 TO 500} |
Logs with the container name nginx in the metadata | __TAG__.container_name:nginx |
Logs with the container name nginx in the metadata, and request latency greater than 1s | __TAG__.container_name:nginx AND request_time:>1 |
Logs containing the message field | message:* or _exists_:message |
Logs that do not contain the message field | NOT _exists_:message |
Fuzzy search: use the asterisk * to match zero, one, or multiple characters, or the question mark ? to match a single character. For example:
IP:192.168.1.* can be used to match 192.168.1.1 and 192.168.1.34.
host:www.te*t.com can be used to match www.test.com and www.telt.com.
The asterisk * or question mark ? cannot be used at the beginning of a word, i.e., prefix fuzzy searches are not supported.
Fields of the long or double type do not support the asterisk * or question mark ? for fuzzy search, but they support a value range, such as status:[400 TO 500}.
If your logs contain host:www.test.com and host:m.test.com and you need to query logs containing test in the middle, you can add . to the delimiters and then search with host:test, or use the LIKE syntax, for example, * | select * where host like '%test%'. However, this method delivers lower performance than the search condition method and is not suitable for scenarios with a large volume of log data.
Feature | Lucene | CQL |
Logical operator | Only uppercase letters are supported, such as AND, NOT, and OR. | Both uppercase and lowercase letters are supported, such as AND, and, NOT, not, OR, and or. |
Symbol escape | Many symbols need to be escaped. For example, to search for /book/user/login/, you need to escape it as \/book\/user\/login\/. | Few symbols need to be escaped, and you can search for /book/user/login/ directly. |
Keyword search | The logical relationship between segments in a keyword is OR. For example, if the delimiter is /, /book/user/login/ is equivalent to book OR user OR login, and many irrelevant logs will be matched. | The logical relationship between segments in a keyword is AND. For example, if the delimiter is /, /book/user/login/ is equivalent to book AND user AND login, which is in line with search habits. |
Phrase search | Phrase searches do not support wildcards. For example, "/book/user/log*/" cannot match /book/user/login/ and /book/user/logout/. | Phrase searches support wildcards. For example, "/book/user/log*/" can match /book/user/login/ and /book/user/logout/. |
Numeric range search | Use the syntax in the form of timeCost:[20 TO 30] for retrieval. | Use the syntax in the form of timeCost>=20 AND timeCost<=30 for retrieval. |
Logs with existing fields | Search using _exists_:key, where key is the field name. | Search using key:*, where key is the field name. |
Syntax | Description |
SELECT | Selects data from a table. It selects eligible data from the current log topic by default. |
AS | Specifies an alias for a column (KEY). |
GROUP BY | Combines aggregate functions to group results based on one or more columns (KEY). |
ORDER BY | Sorts results according to the specified KEY. |
LIMIT | Limits the amount of data returned by the SELECT statement. |
WHERE | Filters the original data found. |
HAVING | Filters grouped and aggregated data. The difference between HAVING and WHERE is that HAVING is executed on data after grouping (GROUP BY) and before ordering (ORDER BY), while WHERE is executed on the original data before aggregation. |
Nested subquery | In some complex statistical analysis scenarios, you need to perform statistical analysis on the original data first and then perform secondary statistical analysis on the analysis results. In this case, you need to nest a SELECT statement into another SELECT statement. This query method is called a nested subquery. |
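The clauses above can be combined in a single statement. The following is a minimal sketch (the log fields level, url, and request_time are hypothetical) that searches, aggregates, groups, filters groups, sorts, and limits in one query:
level:ERROR | SELECT url, count(*) AS error_count, round(avg(request_time), 2) AS avg_time GROUP BY url HAVING count(*) > 10 ORDER BY error_count DESC LIMIT 100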
SQL statements are case-insensitive; for example, SELECT is equivalent to select. Characters enclosed in single quotation marks '' indicate strings, while characters that are unquoted or enclosed in double quotation marks "" indicate field or column names. For example, 'status' indicates the string status, while status or "status" indicates the log field status. If a string contains a single quotation mark ', use '' (two single quotation marks) to represent the single quotation mark itself; for example, '{''version'': ''1.0''}' indicates the raw string {'version': '1.0'}. No special processing is required if the string itself contains a double quotation mark ".
Syntax | Description |
String functions | String concatenation, splitting, length calculation, case conversion, and more. |
Time functions | Time format conversion, statistics by time, time interval calculation, and more. |
IP functions | Parsing IPs to obtain geographic information and more. |
URL functions | Obtaining domain names and parameters from URLs, encoding/decoding URLs, and more. |
Aggregate functions | Calculating the log count, maximum value, minimum value, average value, and more. |
Estimation functions | Calculating the number of unique values, percentile values (e.g., p95/p90), and more. |
Type conversion functions | Variable type conversion; often used in functions that have special requirements on the variable types of parameters. |
Logical functions | AND, OR, NOT, and other logical operations. |
Operators | Mathematical operators (+, -, *, /, etc.) and comparison operators (>, <, etc.). |
Conditional expressions | Condition determination expressions such as CASE WHEN and IF. |
Array functions | Getting the elements in an array, and more. |
Comparison functions | Comparing the calculation result of the current time period with the calculation result of a time period n seconds before. |
JSON functions | Getting JSON objects, converting JSON types, and more. |
Sample | Statement |
Number of logs of failed GET requests (with a status code greater than 400) | method:GET AND status:>400 | select count(*) as errorCount |
Number of logs of failed GET requests (with a status code greater than 400) per minute | method:GET AND status:>400 | select histogram(__TIMESTAMP__, interval 1 minute) as analytic_time_minute, count(*) as errorCount group by analytic_time_minute limit 1000 |
Top five URLs with the largest number of requests | * | select URL, count(*) as log_count group by URL order by log_count desc limit 5 |
Count the proportion of ERROR logs | * | select round((count_if(upper(Level) = 'ERROR'))*100.0/count(*),2) as "ERROR log percentage (%)" |
Number of requests of each province | * | select ip_to_province(client_ip) as province , count(*) as PV group by province order by PV desc limit 1000 |
Metric | Limit | Remarks |
Number of SQL results | Each SQL execution can return up to 10,000 results. | |
Memory usage | Each SQL execution can occupy up to 3 GB of server memory. | This limit is usually triggered when group by, distinct(), or count(distinct()) is used and the aggregated fields have too many distinct values after deduplication via group by or distinct(). We recommend that you optimize the query statement and use fields with fewer distinct values for group statistics, or use approx_distinct() instead of count(distinct()). |
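For instance, if a deduplication query such as * | select count(distinct(user_id)) as uv hits this memory limit because user_id (a hypothetical field) has too many distinct values, the approximate count recommended above usually avoids it:
* | select approx_distinct(user_id) as uv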
Last updated:2024-01-20 17:25:15
Fields such as __PKG_LOGID__ cannot be clicked because statistics are not enabled for them.
Last updated:2024-01-20 17:25:15
The AS clause is used to specify an alias for a column (KEY). Syntax:
* | SELECT column name (KEY) AS alias
* | SELECT COUNT(*) AS PV
Last updated:2024-01-20 17:25:15
The GROUP BY syntax, together with an aggregate function, is used to group analysis results by one or more columns. Syntax:
* | SELECT column, aggregate function GROUP BY [ column name | alias | serial number ]
In a SELECT statement containing the GROUP BY syntax, you can select only GROUP BY columns or aggregate functions, but not non-GROUP BY columns. For example, * | SELECT status, request_time, COUNT(*) AS PV GROUP BY status is an invalid analysis statement because request_time is not a GROUP BY column.
The GROUP BY syntax supports grouping by column name, alias, or serial number, as described in the following table:
Parameter | Description |
Column name | Group data by log field name or aggregate function calculation result column. The syntax supports grouping data by one or multiple columns. |
Alias | Group data by alias of the log field name or aggregate function calculation result. |
Serial number | Serial number (starting from 1) of a column in the SELECT statement.For example, the serial number of the status column is 1, and therefore the following statements are equivalent:* | SELECT status, count(*) AS PV GROUP BY status * | SELECT status, count(*) AS PV GROUP BY 1 |
Aggregate function | The GROUP BY syntax is usually used together with aggregate functions such as MIN, MAX, AVG, SUM, and COUNT. For more information, please see Aggregate Function. |
* | SELECT status, count(*) AS pv GROUP BY status
* | SELECT date_trunc('minute', cast(__TIMESTAMP__ as timestamp)) AS dt, count(*) AS pv GROUP BY dt ORDER BY dt limit 10
The \_\_TIMESTAMP\_\_ field is a reserved field in CLS and indicates the time column. **dt** is the alias of date_trunc('minute', cast(\_\_TIMESTAMP\_\_ as timestamp)). For more information on the date_trunc() function, see Time Truncation Function. limit 10 indicates that up to 10 rows of results are obtained; if the LIMIT syntax is not used, CLS obtains 100 rows of results by default.
You can also use the histogram function to group results at a given interval based on the \_\_TIMESTAMP\_\_ field, for example, counting PV and UV every 5 minutes:
* | SELECT histogram( cast(__TIMESTAMP__ as timestamp), interval 5 minute ) as dt, count(*) as pv, count( distinct(remote_addr) ) as uv group by dt order by dt
Last updated:2024-01-20 17:25:15
The LIMIT syntax is used to limit the number of rows in the output result. Syntax:
limit N
offset S limit N
* | select status, count(*) as pv group by status limit 10
* | select status, count(*) as pv group by status offset 2 limit 40
Metric | Restriction | Remarks |
Number of SQL results | Each SQL returns up to 10,000 results. | Default: 100; Maximum: 10,000 |
Last updated:2024-01-20 17:25:15
The ORDER BY syntax is used to sort analysis results by a specified column name. Syntax:
ORDER BY column name [DESC | ASC]
To sort by multiple columns, use ORDER BY column name 1 [DESC | ASC], column name 2 [DESC | ASC]. If DESC or ASC is not specified, the system sorts the analysis results in ascending order.
Parameter | Description |
Column name | Sort data by log field name or aggregate function calculation result column. |
DESC | Sort data in descending order. |
ASC | Sort data in ascending order. |
* | SELECT status, count(*) AS pv GROUP BY status ORDER BY pv DESC
* | SELECT remote_addr, avg(request_time) as request_time group by remote_addr order by request_time ASC LIMIT 10
Last updated:2024-01-20 17:25:15
The SELECT statement is used to select data from a table. It selects eligible data from the current log topic by default. Syntax:
* | SELECT [Column name(KEY)]
For example, query the remote_addr and method fields from the log data:
* | SELECT remote_addr, method
Query all fields:
* | SELECT *
SELECT can also be followed by arithmetic expressions. For example, you can query the download speed of log data, where download speed (speed) = total number of bytes sent (body_bytes_sent) / request time (request_time):
* | SELECT body_bytes_sent / request_time AS speed
Column names (KEY) must conform to SQL's column naming conventions, such as remote_addr. If a field in a log has a non-compliant name, you need to surround the name with "". You can also specify an alias for the field with the AS syntax in SQL.
For the field remote_addr, which conforms to SQL's column naming conventions, it can be queried by SELECT directly:
* | SELECT remote_addr
For the field __TAG__.pod_label_qcloud-app, which does not conform to SQL's column naming conventions, it needs to be surrounded by "":
* | SELECT "__TAG__.pod_label_qcloud-app"
For the field __TIMESTAMP__, which does not conform to SQL's column naming conventions, it needs to be surrounded by "" and specified with an alias through the AS syntax:
* | SELECT "__TIMESTAMP__" AS log_time
Last updated:2024-01-20 17:25:15
The WHERE statement is used to extract the logs that meet the specified conditions. Syntax:
* | SELECT column (KEY) WHERE column (KEY) operator value
The operator can be =, <>, >, <, >=, <=, BETWEEN, IN, or LIKE.
Where possible, filter with search conditions instead of WHERE to get the statistical result faster; for example, use status:>400 | select count(*) as logCounts instead of * | select count(*) as logCounts where status>400.
The WHERE statement does not allow aliases defined with the AS clause. For example, if level:* | select level as log_level where log_level='ERROR' is run, an error will be reported because the statement does not comply with the SQL-92 specifications.
For example, query logs whose status is greater than 400:
* | SELECT * WHERE status > 400
* | SELECT count(*) as count WHERE method='GET' and remote_addr='192.168.10.101'
* | SELECT round(sum(body_bytes_sent) / count(body_bytes_sent), 2) AS avg_size WHERE url like '%.mp4'
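As noted above, an alias defined with AS cannot be referenced in WHERE. A compliant sketch (assuming a log field named level) filters on the original field name and applies the alias only in SELECT:
level:* | select level as log_level where level='ERROR'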
Last updated:2024-01-20 17:25:15
The HAVING syntax is used to filter grouped and aggregated data. The difference between HAVING and WHERE is that HAVING is executed on data after grouping (GROUP BY) and before ordering (ORDER BY), while WHERE is executed on the original data before aggregation. Syntax:
* | SELECT column, aggregate function GROUP BY [ column name | alias | serial number ] HAVING aggregate function operator value
The operator can be =, <>, >, <, >=, <=, BETWEEN, IN, or LIKE. For example, find URLs whose average response time exceeds 1,000 ms:
* | select avg(responseTime) as time_avg, URL group by URL having avg(responseTime) > 1000 order by avg(responseTime) desc limit 10000
Last updated:2024-01-20 17:25:15
In some complex statistical analysis scenarios, you need to nest a SELECT statement into another SELECT statement. This query method is called a nested subquery. Syntax:
* | SELECT key FROM (subquery)
For example, use the compare function to compare the current PV with the PV one day earlier, where 86400 indicates the current time minus 86400 seconds (1 day):
* | SELECT compare(PV, 86400) FROM (SELECT count(*) AS PV)
SELECT count(*) AS PV is the level-1 statistical analysis: it calculates the website PV based on raw logs.
SELECT compare(PV, 86400) FROM is the level-2 statistical analysis: it performs secondary statistical analysis on the PV result of the level-1 analysis, using the compare function to obtain the website PV of the day before.
* | SELECT compare[1] AS today, compare[2] AS yesterday, compare[3] AS ratio FROM (SELECT compare(PV, 86400) AS compare FROM (SELECT COUNT(*) AS PV))
SELECT compare[1] AS today, compare[2] AS yesterday, compare[3] AS ratio FROM gets the value at a specified position in the result of the compare function based on the array subscript.
Last updated:2025-11-19 20:07:19
Characters enclosed in single quotation marks '' indicate strings, while characters that are unquoted or enclosed in double quotation marks "" indicate field or column names. For example, 'status' indicates the string status, while status or "status" indicates the log field status. If a string contains a single quotation mark ', use '' (two single quotation marks) to represent the single quotation mark itself; for example, '{''version'': ''1.0''}' indicates the raw string {'version': '1.0'}. No special processing is required if the string itself contains a double quotation mark ". The key parameters in the following functions indicate log field names.
Function | Description | Example |
chr(number) | Returns characters that match the ASCII code point (bit) specified by the input parameter. The return value is of the VARCHAR type. | Return characters that match ASCII code bit 77: * | SELECT chr(77) |
codepoint(string) | Converts ASCII field values to BIGINT values. The return value is of the integer type. | Convert character values in ASCII code to their corresponding positions: * | SELECT codepoint('M') |
concat(key1, ..., keyN) | Concatenates the strings key1, key2, ... keyN. The concatenation effect is consistent with that of the || operator. The return value is of the VARCHAR type. Note that if any of the strings is null, the return value is null. To skip null values, use concat_ws. | Concatenate multiple strings into one: * | SELECT concat(remote_addr, host, time_local) |
concat_ws(split_string,key0, ..., keyN) | Concatenates strings key1, key2, ...keyN using split_string as the separator. split_string can be a string or variable. If split_string is null, null values in key1, key2, ...keyN are skipped. The return result is of the VARCHAR type. | Concatenate multiple strings using / as the separator: * | SELECT concat_ws('/', remote_addr,host,time_local) |
concat_ws(split_string, array(varchar)) | Concatenates elements in an array into a string using split_string as the separator. If split_string is null, the result is null and null values in the array are skipped. The return result is of the VARCHAR type.Note: In this function, the array(varchar) parameter is an array, not a string. | Concatenate elements in an array into a string using # as the separator: (in this example, the output of the split function is an array)* | select concat_ws('#',split('cloud.tencent.com/product/cls', '/')) |
format(format,args...) | Formats the output of the args parameter using the format format. The return value is of the VARCHAR type. | Format the output of the remote_addr and host parameters using the format of IP address: %s, Domain name: %s:* | SELECT format('IP address: %s, Domain name: %s', remote_addr, host) |
hamming_distance(key1, key2) | Returns the Hamming distance between the key1 and key2 strings. Note that the two strings must have the same length. The return value is of the BIGINT type. | Return the Hamming distance between the remote_addr and remote_addr strings:* | SELECT hamming_distance(remote_addr, remote_addr) |
length(key) | Returns the length of a string. The return value is of the BIGINT type. | Return the length of the http_user_agent string:* | SELECT length(http_user_agent) |
levenshtein_distance(key1, key2) | Returns the Levenshtein distance between the key1 and key2 strings. The return value is of the BIGINT type. | Return the Levenshtein distance between the remote_addr and http_protocol strings:* | SELECT levenshtein_distance(remote_addr, http_protocol) |
lower(key) | Converts a string to lowercase. The return value is of the VARCHAR type in lowercase. | Convert the http_protocol string to lowercase:* | SELECT lower(http_protocol) |
lpad(key, size, padstring) | Left pads padString to a string to size characters. If size is less than the length of key, the result is truncated to size characters. size must be non-negative, and padstring must be non-empty. The return value is of the VARCHAR type. | Left pad the '0' to the remote_addr string to 32 characters:* | SELECT lpad(remote_addr, 32, '0') |
ltrim(key) | Removes all leading whitespace characters from a string. The return value is of the VARCHAR type. | Remove all leading whitespace characters from the http_user_agent string:* | SELECT ltrim(http_user_agent) |
position(substring IN key) | Returns the position of substring in a string. Positions start with 1. If the position is not found, 0 is returned. This function takes the special syntax IN as a parameter. For other information, see strpos(). The return value is of the BIGINT type. | Return the position of the 'G' characters in http_method:* | select position('G' IN http_method) |
replace(key, substring) | Removes all substring from the key string. The return value is of the VARCHAR type. | Remove all 'Oct' from the time_local string:* | select replace(time_local, 'Oct') |
replace(key, substring, replace) | Replaces all substring in a string with the replace string. The return value is of the VARCHAR type. | Replace all 'Oct' in the time_local string with '10':* | select replace(time_local,'Oct','10') |
reverse(key) | Reverses the key string. The return value is of the VARCHAR type. | Reverse the host string:* | select reverse(host) |
rpad(key, size, padstring) | Right pads padstring to a string to size characters. If size is less than the length of key, the result is truncated to size characters. size must be non-negative, and padstring must be non-empty. The return value is of the VARCHAR type. | Right pad '0' to the remote_addr string to 32 characters:* | select rpad(remote_addr, 32, '0') |
rtrim(key) | Removes all trailing whitespace characters from a string. The return value is of the VARCHAR type. | Remove all trailing whitespace characters from the http_user_agent string:* | select rtrim(http_user_agent) |
split(key, delimiter) | Splits a string using a specified delimiter and returns a string array. | Split the http_user_agent string using the '/' delimiter and return a string array:* | SELECT split(http_user_agent, '/') |
split(key, delimiter, limit) | Splits a string using a specified delimiter and returns a string array with the maximum length specified by limit. The last element in the string array always contains all the remaining part of key. limit must be a positive integer. | Split the http_user_agent string using the '/' delimiter and return a string array with the length of 10 characters:* | SELECT split(http_user_agent, '/', 10) |
split_part(key, delimiter, index) | Splits a string using a specified delimiter and returns the string at the index position in the array. Indexes start with 1. If the value of index is greater than the length of the array, null is returned. The return value is of the VARCHAR type. | Split the http_user_agent string using the '/' delimiter and return the string at position 1:* | SELECT split_part(http_user_agent, '/', 1) |
strpos(key, substring) | Returns the position of substring in a string. Positions start with 1. If the position is not found, 0 is returned. The return value is of the BIGINT type. | Return the position of 'org' in the host string:* | SELECT strpos(host, 'org') |
strpos(key, substring, instance) | Returns the position of the N-th instance of substring in the string. If instance is a negative number, the position is counted starting from the end of the string. Positions start with 1. If the position is not found, 0 is returned. The return value is of the BIGINT type. | Return the position of the first instance of 'g' in the host string:* | SELECT strpos(host, 'g', 1) |
substr(key, start) | Returns the rest of a string from the starting position start. Positions start with 1. A negative starting position is interpreted as being relative to the end of the string, for example, [...]. The return value is of the VARCHAR type. | Return the rest of the remote_user string from the second character:* | SELECT substr(remote_user, 2) |
substr(key, start, length) | Returns a substring from a string of length length from the starting position start. Positions start with 1. A negative starting position is interpreted as being relative to the end of the string. The return value is of the VARCHAR type. | Return the 2nd to 5th characters of the remote_user string:* | SELECT substr(remote_user, 2, 5) |
translate(key, from, to) | Replaces all characters in key that appear in from with characters at the corresponding position in to. If from contains repeated characters, only the first character is counted. If the characters in from do not exist in the source, the source is copied directly. If the length of from is greater than that of to, the corresponding characters will be deleted. The return value is of the VARCHAR type. | Replace the '123' characters in the remote string with the 'ABC' characters:* | SELECT translate(remote_user, '123', 'ABC') |
trim(key) | Removes leading and trailing whitespace characters from a string. The return value is of the VARCHAR type. | Remove leading and trailing whitespace characters from the http_cookies string:* | SELECT trim(http_cookies) |
upper(key) | Converts a string to uppercase. The return value is of the VARCHAR type in uppercase. | Convert the lowercase characters in the host string to uppercase characters:* | SELECT upper(host) |
word_stem(word) | Returns the stem of word in the English language. The return value is of the VARCHAR type. | Return the stem of the English word 'Mozilla': * | SELECT word_stem('Mozilla') |
word_stem(word, lang) | Returns the stem of word in the lang language. The return value is of the VARCHAR type. | Return the stem of selects in English:* | SELECT word_stem('selects', 'en') |
Function | Description |
normalize(string) | Converts a string to the NFC standard format. The return value is of the VARCHAR type. |
normalize(string, form) | Converts string to the form format. The form parameter must be keywords (NFD, NFC, NFKD, or NFKC) instead of a string. The return value is of the VARCHAR type. |
to_utf8(string) | Converts string to a UTF-8 binary string. The return value is of the VARBINARY type. |
from_utf8(binary) | Converts a binary string to a UTF-8 string. Invalid UTF-8 characters will be replaced with "U+FFFD". The return value is of the VARCHAR type. |
from_utf8(binary, replace) | Converts a binary string to a UTF-8 string. Invalid UTF-8 characters will be replaced with replace. The return value is of the VARCHAR type. |
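A minimal sketch of the UTF-8 conversion functions above, round-tripping a field value through a binary string (the log field host is an assumption):
* | SELECT from_utf8(to_utf8(host)) AS host_roundtrip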
10.135.46.111 - - [05/Oct/2015:21:14:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354
body_bytes_sent: 9703
http_host: 127.0.0.1
http_protocol: HTTP/1.1
http_referer: http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum
http_user_agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
remote_addr: 10.135.46.111
request_length: 782
request_method: GET
request_time: 0.354
request_url: /my/course/1
status: 200
time_local: [05/Oct/2015:21:14:30 +0800]
upstream_response_time: 0.354
* | SELECT count(*) AS pv, split_part(request_url, '?', 1) AS Path GROUP BY Path ORDER BY pv DESC LIMIT 3
* | SELECT substr(http_protocol,1,4) AS http_protocol, count(*) AS count group by http_protocol
Replace the 123 characters in the remote_user string with the ABC characters and return a result of the VARCHAR type:
* | SELECT translate(remote_user, '123', 'ABC')
* | SELECT substr(remote_user, 2, 5)
* | SELECT strpos(http_protocol, 'H')
* | SELECT split(http_protocol, '/', 2)
* | select replace(time_local, 'Oct', '10')
Last updated:2024-03-20 11:47:49
Except for the histogram and time_series functions, which adopt the UTC+8 time zone, other Unix timestamp (unixtime) conversion functions adopt the UTC+0 time zone. To use another time zone, use a function that supports specifying the time zone, such as from_unixtime(__TIMESTAMP__/1000, 'Asia/Shanghai'), or manually add the time zone offset to unixtime, for example, date_trunc('second', cast(__TIMESTAMP__+8*60*60*1000 as timestamp)).
Function | Description | Example |
current_date | Returns the current date. Return value format: YYYY-MM-DD, such as 2021-05-21Return value type: DATE | * | select current_date |
current_time | Returns the current time. Return value format: HH:MM:SS.Ms Time zone, such as 17:07:52.143+08:00Return value type: TIME | * | select current_time |
current_timestamp | Returns the current timestamp. Return value format: YYYY-MM-DDTHH:MM:SS.Ms Time zone, such as 2021-07-15T17:10:56.735+08:00[Asia/Shanghai]Return value type: TIMESTAMP | * | select current_timestamp |
current_timezone() | Returns the time zone defined by IANA (America/Los_Angeles) or the offset from UTC (+08:35). Return value type: VARCHAR, such as Asia/Shanghai | * | select current_timezone() |
localtime | Returns the local time. Return value format: HH:MM:SS.Ms, such as 19:56:36Return value type: TIME | * | select localtime |
localtimestamp | Returns the local date and time. Return value format: YYYY-MM-DD HH:MM:SS.Ms, such as 2021-07-15 19:56:26.908Return value type: TIMESTAMP | * | select localtimestamp |
now() | Returns the current date and time. This function is used in the same way as the current_timestamp function. Return value format: YYYY-MM-DDTHH:MM:SS.Ms Time zone, such as 2021-07-15T17:10:56.735+08:00[Asia/Shanghai]Return value type: TIMESTAMP | * | select now() |
last_day_of_month(x) | Returns the last day of a month. Return value format: YYYY-MM-DD, such as 2021-05-31Return value type: DATE | * | select last_day_of_month(cast(__TIMESTAMP__ as timestamp)) |
from_iso8601_date(string) | Parses an ISO 8601 formatted string into a date. Return value format: YYYY-MM-DD, such as 2021-05-31Return value type: DATE | * | select from_iso8601_date('2021-03-21') |
from_iso8601_timestamp(string) | Parses an ISO 8601 formatted string into a timestamp with a time zone. Return value format: HH:MM:SS.Ms Time zone, such as 17:07:52.143+08:00Return value type: TIMESTAMP | * | select from_iso8601_timestamp('2020-05-13') |
from_unixtime(unixtime) | Parses a Unix formatted string into a timestamp. Return value format: YYYY-MM-DD HH:MM:SS.Ms, such as 2017-05-17 01:41:15.000Return value type: TIMESTAMP | Example 1: * | select from_unixtime(1494985275) Example 2: * | select from_unixtime(__TIMESTAMP__/1000) |
from_unixtime(unixtime, zone) | Parses a Unix formatted string into a timestamp with a time zone. Return value format: YYYY-MM-DD HH:MM:SS.Ms Time zone, such as 2017-05-17T09:41:15+08:00[Asia/Shanghai]Return value type: TIMESTAMP | Example 1: * | select from_unixtime(1494985275, 'Asia/Shanghai')Example 2: * | select from_unixtime(__TIMESTAMP__/1000, 'Asia/Shanghai') |
to_unixtime(timestamp) | Parses a timestamp formatted string into a Unix timestamp. Return value type: LONG, such as 1626347592.037 | * | select to_unixtime(cast(__TIMESTAMP__ as timestamp)) |
to_milliseconds(interval) | Returns a time interval in milliseconds. Return value type: BIGINT, such as 300000 | * | select to_milliseconds(INTERVAL 5 MINUTE) |
to_iso8601(x) | Parses a date and time expression of the DATE or TIMESTAMP type into a date and time expression in the ISO8601 format. | * | select to_iso8601(current_timestamp) |
timezone_hour(timestamp) | Returns the hour offset of the timestamp's time zone. | * | SELECT current_timestamp, timezone_hour(current_timestamp) |
timezone_minute(timestamp) | Returns the minute offset of the timestamp's time zone. | * | SELECT current_timestamp, timezone_minute(current_timestamp) |
The histogram() function can be used to group and aggregate log data at a given interval. Syntax:
histogram(time_column, interval)
Parameter | Description |
time_column | Time column (KEY), such as \_\_TIMESTAMP\_\_. The value in this column must be a UNIX timestamp of the LONG type or a date and time expression of the TIMESTAMP type in milliseconds. If a value does not meet the requirement, use the cast function to convert the ISO 8601 formatted time string into the TIMESTAMP type, for example, cast('2020-08-19T03:18:29.000Z' as timestamp), or use the [date_parse](#date_parse) function to convert a time string of another custom type. If the time column adopts the TIMESTAMP type, the corresponding date and time expression must be in the UTC+0 time zone. If the date and time expression itself is in a different time zone, adjust it to UTC+0 by calculation. For example, if the time zone of the original time is UTC+8, use cast('2020-08-19T03:18:29.000Z' as timestamp) - interval 8 hour to adjust the time zone. |
interval | Time interval. The following time units are supported: SECOND, MINUTE, HOUR, and DAY. For example, INTERVAL 5 MINUTE indicates an interval of 5 minutes. |
* | select histogram(__TIMESTAMP__, INTERVAL 5 MINUTE) AS dt, count(*) as PV group by dt order by dt limit 1000
The time_series() function can be used to group and aggregate log data at a given interval. Its main difference from the histogram() function is that it can complete missing data in your query time window. Note that it does not support desc sorting. Syntax:
time_series(time_column, interval, format, padding)
Parameter | Description |
time_column | Time column (KEY), such as \_\_TIMESTAMP\_\_. The value in this column must be a UNIX timestamp of the LONG type or a date and time expression of the TIMESTAMP type in milliseconds. If a value does not meet the requirement, use the cast function to convert the ISO 8601 formatted time string into the TIMESTAMP type, for example, cast('2020-08-19T03:18:29.000Z' as timestamp), or use the [date_parse](#date_parse) function to convert a time string of another custom type. If the time column adopts the TIMESTAMP type, the corresponding date and time expression must be in the UTC+0 time zone. If the date and time expression itself is in a different time zone, adjust it to UTC+0 by calculation. For example, if the time zone of the original time is UTC+8, use cast('2020-08-19T03:18:29.000Z' as timestamp) - interval 8 hour to adjust the time zone. |
interval | Time interval. Valid values are s (second), m (minute), h (hour), and d (day). For example, 5m indicates 5 minutes. |
format | Time format of the return result. |
padding | Value used to complete missing data. Valid values include: 0: Complete a missing value with 0null: Complete a missing value with nulllast: Complete a missing value with the value of the previous point in time next: Complete a missing value with the value of the next point in time avg: Complete a missing value with the average value of the previous and next points in time |
* | select time_series(__TIMESTAMP__, '2m', '%Y-%m-%dT%H:%i:%s+08:00', '0') as time, count(*) as count group by time order by time limit 1000
Function | Description | Example |
date_trunc(unit,x) | Truncates x to unit. x is of the TIMESTAMP type. | * | SELECT date_trunc('second', cast(__TIMESTAMP__ as timestamp)) |
Unit | Example Truncated Value | Description |
second | 2021-05-21 05:20:01.000 | - |
minute | 2021-05-21 05:20:00.000 | - |
hour | 2021-05-21 05:00:00.000 | - |
day | 2021-05-21 00:00:00.000 | Returns the zero o'clock of a specified date. |
week | 2021-05-19 00:00:00.000 | Returns the zero o'clock on Monday of a specified week. |
month | 2021-05-01 00:00:00.000 | Returns the zero o'clock on the first day of a specified month. |
quarter | 2021-04-01 00:00:00.000 | Returns the zero o'clock on the first day of a specified quarter. |
year | 2021-01-01 00:00:00.000 | Returns the zero o'clock on the first day of a specified year. |
Function | Description | Example |
extract(field FROM x) | Extracts the specified fields from the date and time expression (x). | * |select extract(hour from cast('2021-05-21 05:20:01.100' as timestamp)) |
The field parameter supports the following values: year, quarter, month, week, day, day_of_month, day_of_week, dow, day_of_year, doy, year_of_week, yow, hour, minute, second. extract(field FROM x) can be simplified to field(); for example, extract(hour from cast('2021-05-21 05:20:01.100' as timestamp)) can be simplified to hour(cast('2021-05-21 05:20:01.100' as timestamp)).
Field | Extraction Result | Description | Simplified Format |
year | 2021 | Extracts the year from the target date. | year(x) |
quarter | 2 | Extracts the quarter from the target date. | quarter(x) |
month | 5 | Extracts the month from the target date. | month(x) |
week | 20 | Calculates the week of the year the target date is in. | week(x) |
day | 21 | Extracts the day from the target date by month, which is equivalent to day_of_month. | day(x) |
day_of_month | 21 | Equivalent to day. | day(x) |
day_of_week | 5 | Calculates the day of the week for the target date, which is equivalent to dow. | day_of_week(x) |
dow | 5 | Equivalent to day_of_week. | day_of_week(x) |
day_of_year | 141 | Calculates the day of the year for the target date, which is equivalent to doy. | day_of_year(x) |
doy | 141 | Equivalent to day_of_year. | day_of_year(x) |
year_of_week | 2021 | Calculates the year of the ISO week for the target date, which is equivalent to yow. | year_of_week(x) |
yow | 2021 | Equivalent to year_of_week. | year_of_week(x) |
hour | 5 | Extracts the hour from the target date. | hour(x) |
minute | 20 | Extracts the minute from the target date. | minute(x) |
second | 1 | Extracts the second from the target date. | second(x) |
Function | Description | Example |
date_add(unit,value,timestamp) | Adds N time units ( unit) to timestamp. If value is a negative value, subtraction is performed. | * | SELECT date_add('day', -1, TIMESTAMP '2020-03-03 03:01:00')The return value is the date and time one day earlier than 2020-03-03 03:01:00, i.e., 2020-03-02 03:01:00. |
date_diff(unit, timestamp1, timestamp2) | Returns the time difference between two time expressions, for example, calculates the number of time units ( unit) between timestamp1 and timestamp2. | * |SELECT date_diff('hour', TIMESTAMP '2020-03-01 00:00:00', TIMESTAMP '2020-03-02 00:00:00')The return value is the time difference between 2020-03-01 and 2020-03-02, i.e., one day. |
The following time units (unit) are supported:
unit | Description |
millisecond | Millisecond |
second | Second |
minute | Minute |
hour | Hour |
day | Day |
week | Week |
month | Month |
quarter | Quarter of a year |
year | Year |
* | SELECT date_diff('second', TIMESTAMP '2020-03-01 00:00:00', TIMESTAMP '2020-03-02 00:00:00')
Function | Description | Example |
parse_duration(string) | Parses a unit value string into a duration expression. Return value type: INTERVAL, such as 0 00:00:00.043 (D HH:MM:SS.Ms) | * | SELECT parse_duration('3.81 d') |
human_readable_seconds(double) | Converts a number of seconds into a human-readable duration expression. Return value type: VARCHAR, such as 1 minutes and 36 seconds | * | SELECT human_readable_seconds(96) |
Unit | Description |
ns | Nanosecond |
us | Microsecond |
ms | Millisecond |
s | Second |
m | Minute |
h | Hour |
d | Day |
* | SELECT parse_duration('3.81 d')
Function | Description | Example |
date_format(timestamp, format) | Parses a date and time string of the timestamp type into a string in the format format. | * | select date_format(cast(__TIMESTAMP__ as timestamp), '%Y-%m-%d') |
date_parse(string, format) | Parses a date and time string in the format format into the timestamp type. | * | select date_parse('2017-05-17 09:45:00','%Y-%m-%d %H:%i:%s') |
The following formats (format) are supported:
Format | Description |
%a | Abbreviated names of the days of the week, such as Sun and Sat |
%b | Abbreviated month name, such as Jan and Dec |
%c | Month, numeric. Value range: 1-12 |
%d | Day of the month, decimal. Value range: 01-31 |
%e | Day of the month, decimal. Value range: 1-31 |
%f | Millisecond. Value range: 0-000000 |
%H | Hour, in the 24-hour time system |
%h | Hour, in the 12-hour time system |
%I | Hour, in the 12-hour time system |
%i | Minute, numeric. Value range: 00-59 |
%j | Day of the year. Value range: 001-366 |
%k | Hour. Value range: 0-23 |
%l | Hour. Value range: 1-12 |
%M | Month name in English, such as January and December |
%m | Month name in digits, such as 01 and 02 |
%p | AM or PM |
%r | Time, in the 12-hour time system. Format: hh:mm:ss AM/PM |
%S | Second. Value range: 00-59 |
%s | Second. Value range: 00-59 |
%T | Time, in the 24-hour time system. Format: hh:mm:ss |
%v | Week of the year, where Monday is the first day of the week. Value range: 01-53 |
%W | Names of the days of the week, such as Sunday and Saturday |
%Y | Year (4-digit), such as 2020 |
%y | Year (2-digit), such as 20 |
%% | Escape character of % |
For example, parse the string '2017-05-17 09:45:00' in the %Y-%m-%d %H:%i:%s format into a date and time expression of the TIMESTAMP type, i.e., '2017-05-17 09:45:00.0':
* | SELECT date_parse('2017-05-17 09:45:00','%Y-%m-%d %H:%i:%s')
Last updated:2024-03-20 11:47:49
The KEY field in the following functions indicates the log field (for example, ip) whose value is an IP address. If the value is a private network IP address or an invalid field, it cannot be parsed and is displayed as NULL or Unknown.
Function | Description | Example |
ip_to_domain(KEY) | Determines whether an IP address belongs to a private or public network. Valid values are intranet (private network IP address), internet (public network IP address), and invalid (invalid IP address). | * | SELECT ip_to_domain(ip) |
ip_to_country(KEY) | Analyzes the country or region to which an IP address belongs. The country's or region's name is returned. | * | SELECT ip_to_country(ip) |
ip_to_country_code(KEY) | Analyzes the code of the country or region to which an IP address belongs. The country's or region's code is returned. | * | SELECT ip_to_country_code(ip) |
ip_to_country_geo(KEY) | Analyzes the latitude and longitude of the country or region to which an IP address belongs. The country's or region's latitude and longitude are returned. | * | SELECT ip_to_country_geo(ip) |
ip_to_province(KEY) | Analyzes the province to which an IP address belongs. The province's name is returned. | * | SELECT ip_to_province(ip) |
ip_to_province_code(KEY) | Analyzes the code of the province to which an IP address belongs. The province's administrative zone code is returned. | * | SELECT ip_to_province_code(ip) |
ip_to_province_geo(KEY) | Analyzes the latitude and longitude of the province to which an IP address belongs. The province's latitude and longitude are returned. | * | SELECT ip_to_province_geo(ip) |
ip_to_city | Analyzes the city to which an IP address belongs. The city's name is returned. | * | SELECT ip_to_city(ip) |
ip_to_city_code | Analyzes the code of the city to which an IP address belongs. The city's administrative zone code is returned. Currently, cities in Taiwan (China) and outside China are not supported. | * | SELECT ip_to_city_code(ip) |
ip_to_city_geo | Analyzes the latitude and longitude of the city to which an IP address belongs. The city's latitude and longitude are returned. Currently, cities in Taiwan (China) and outside China are not supported. | * | SELECT ip_to_city_geo(ip) |
ip_to_provider(KEY) | Analyzes the ISP to which an IP address belongs. The ISP's name is returned. | * | SELECT ip_to_provider(ip) |
The KEY field in the following functions indicates the log field (for example, ip) whose value is an IP address. For the ip_subnet_min, ip_subnet_max, and ip_subnet_range functions, the value of the KEY field is an IP address with a subnet mask (for example, 192.168.1.0/24). If the field value is a general IP address, you need to use the concat function to convert it to an IP address with a subnet mask.
Function | Description | Example |
ip_prefix(KEY,prefix_bits) | Gets the prefix of an IP address. An IP address with a subnet mask is returned, for example, 192.168.1.0/24. | * | SELECT ip_prefix(ip,24) |
ip_subnet_min(KEY) | Gets the smallest IP address in an IP range. The return value is an IP address, for example, 192.168.1.0. | * | SELECT ip_subnet_min(concat(ip,'/24')) |
ip_subnet_max(KEY) | Gets the largest IP address in an IP range. The return value is an IP address, for example, 192.168.1.255. | * | SELECT ip_subnet_max(concat(ip,'/24')) |
ip_subnet_range(KEY) | Gets the range of an IP range. The return value is an IP address of the Array type, for example, [[192.168.1.0, 192.168.1.255]]. | * | SELECT ip_subnet_range(concat(ip,'/24')) |
is_subnet_of | Determines whether an IP address is in a specified IP range. The return value is of the Boolean type. | * | SELECT is_subnet_of('192.168.0.1/24', ip) |
is_prefix_subnet_of | Determines whether an IP range is a subnet of a specified IP range. The return value is of the Boolean type. | * | SELECT is_prefix_subnet_of('192.168.0.1/24',concat(ip, '/24')) |
For example, count the PV of requests excluding those from private network IP addresses, where the log field is ip:
* | SELECT count(*) AS PV where ip_to_domain(ip)!='intranet'
* | SELECT ip_to_province(ip) AS province, count(*) as PV GROUP BY province ORDER BY PV desc LIMIT 10
* | SELECT ip_to_province(ip) AS province, count(*) as PV where ip_to_domain(ip)!='intranet' GROUP BY province ORDER BY PV desc LIMIT 10
* | SELECT ip_to_geo(ip) AS geo, count(*) AS pv GROUP BY geo ORDER BY pv DESC
Last updated:2024-03-20 11:47:49
[protocol:][//host[:port]][path][?query][#fragment]
In this format, the components are separated by delimiters such as : and ?.
Function | Description | Example | Output |
url_extract_fragment(url) | Extracts fragment from the URL. The result is of the varchar type. | * | select url_extract_fragment('https://console.intl.cloud.tencent.com/#/project/dashboard-demo/categoryList') | /project/dashboard-demo/categoryList |
url_extract_host(url) | Extracts host from the URL. The result is of the varchar type. | * | select url_extract_host('https://console.intl.cloud.tencent.com/cls') | console.intl.cloud.tencent.com |
url_extract_parameter(url, name) | Extracts the value of query from the URL. The result is of the varchar type. | * | select url_extract_parameter('https://console.intl.cloud.tencent.com/cls?region=ap-chongqing','region') | ap-chongqing |
url_extract_path(url) | Extracts path from the URL. The result is of the varchar type. | * | select url_extract_path('https://console.intl.cloud.tencent.com/cls?region=ap-chongqing') | /cls |
url_extract_port(url) | Extracts port from the URL. The result is of the bigint type. | * | select url_extract_port('https://console.intl.cloud.tencent.com:80/cls?region=ap-chongqing') | 80 |
url_extract_protocol(url) | Extracts protocol from the URL. The result is of the varchar type. | * | select url_extract_protocol('https://console.intl.cloud.tencent.com:80/cls?region=ap-chongqing') | https |
url_extract_query(url) | Extracts the key of query from the URL. The result is of the varchar type. | * | select url_extract_query('https://console.intl.cloud.tencent.com:80/cls?region=ap-chongqing') | region=ap-chongqing |
url_encode(value) | Escapes value so that it can be used in a URL query. Letters and digits are not encoded. The characters .-*_ are not encoded. Spaces are encoded as +. Other characters are encoded based on UTF-8. | * | select url_encode('https://console.intl.cloud.tencent.com:80/cls?region=ap-chongqing') | https%3A%2F%2Fconsole.intl.cloud.tencent.com%3A80%2Fcls%3Fregion%3Dap-chongqing |
url_decode(value) | Decodes the URL. | * | select url_decode('https%3A%2F%2Fconsole.intl.cloud.tencent.com%3A80%2Fcls%3Fregion%3Dap-chongqing') | https://console.intl.cloud.tencent.com:80/cls?region=ap-chongqing |
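A sketch applying these functions to a log field rather than a literal URL (assuming a hypothetical field named url that stores full request URLs), counting page views per path:
* | SELECT url_extract_path(url) AS path, count(*) AS pv GROUP BY path ORDER BY pv DESC LIMIT 10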
Last updated:2024-03-20 11:47:49
The parameters x and y in the following functions can be numbers, log fields, or expressions with numerical calculation results.
Function | Description |
abs(x) | Returns the absolute value of x. |
cbrt(x) | Returns the cube root of x. |
sqrt(x) | Returns the square root of x. |
cosine_similarity(x,y) | Returns the cosine similarity between the vectors x and y.For example, * | SELECT cosine_similarity(MAP(ARRAY['x','y'], ARRAY[1.0,0.0]), MAP(ARRAY['x','y'], ARRAY[0.0,1.0])) returns 0. |
degrees(x) | Converts angle x in radians to degrees. |
radians(x) | Converts angle x in degrees to radians. |
e() | Returns the constant e, the base of the natural logarithm. |
exp(x) | Returns e raised to the power of x. |
ln(x) | Returns the natural logarithm of x. |
log2(x) | Returns the base-2 logarithm of x. |
log10(x) | Returns the base-10 logarithm of x. |
log(x,b) | Returns the base-b logarithm of x. |
pi() | Returns the value of Pi, accurate to 14 decimal places. |
pow(x,b) | Returns x raised to the power of b. |
rand() | Returns a random value. |
random(0,n) | Returns a random number within the [0,n) range. |
round(x) | Returns the rounded value of x. |
round(x, N) | Returns x rounded to N decimal places. |
floor(x) | Returns x rounded down to the nearest integer. |
ceiling(x) | Returns x rounded up to the nearest integer. |
from_base(varchar, bigint) | Converts a string into a number based on BASE encoding. |
to_base(x, radix) | Converts a number into a string based on BASE encoding. |
truncate(x) | Returns x rounded to an integer by dropping digits after the decimal point. |
acos(x) | Returns the arc cosine of x. |
asin(x) | Returns the arc sine of x. |
atan(x) | Returns the arc tangent of x. |
atan2(y,x) | Returns the arc tangent of the result of dividing y by x. |
cos(x) | Returns the cosine of x. |
sin(x) | Returns the sine of x. |
cosh(x) | Returns the hyperbolic cosine of x. |
tan(x) | Returns the tangent of x. |
tanh(x) | Returns the hyperbolic tangent of x. |
infinity() | Returns the constant representing positive infinity. |
is_nan(x) | Determines if the target value is Not a Number (NaN). |
nan() | Returns a "Not a Number" (NaN) value. |
mod(x, y) | Returns the remainder when x is divided by y. |
sign(x) | Returns the sign of x represented by 1, 0, or -1. |
width_bucket(x, bound1, bound2, n) | Returns the bucket number of x in an equi-width histogram, with n buckets within bounds of bound1 and bound2. For example, * | select timeCost,width_bucket(timeCost,10,1000,5) |
width_bucket(x, bins) | Returns the bin number of x with specific bins specified by the array bins. For example, * | select timeCost,width_bucket(timeCost,array[10,100,1000]) |
* | SELECT diff [1] AS today, round((diff [3] -1.0) * 100, 2) AS growth FROM (SELECT compare(pv, 86400) as diff FROM (SELECT COUNT(*) as pv FROM log))
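A simpler sketch combining a mathematical function with an aggregate function (the numeric log field request_time is an assumption), reporting the average request time in milliseconds rounded to two decimal places:
* | SELECT round(avg(request_time) * 1000, 2) AS avg_latency_ms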
Last updated:2024-03-20 11:47:49
Function | Description |
corr(key1, key2) | Returns the correlation coefficient of two columns. The calculation result range is [-1,1]. |
covar_pop(key1, key2) | Returns the population covariance of two columns. |
covar_samp(key1, key2) | Returns the sample covariance of two columns. |
regr_intercept(key1, key2) | Returns linear regression intercept of input values. key1 is the dependent value. key2 is the independent value. |
regr_slope(key1, key2) | Returns linear regression slope of input values. key1 is the dependent value. key2 is the independent value. |
stddev(key) | Returns the sample standard deviation of the key column. This function is equivalent to the stddev_samp function. |
stddev_samp(key) | Returns the sample standard deviation of the key column. |
stddev_pop(key) | Returns the population standard deviation of the key column. |
variance(key) | Returns the sample variance of the key column. This function is equivalent to the var_samp function. |
var_samp(key) | Returns the sample variance of the key column. |
var_pop(key) | Returns the population variance of the key column. |
For example, calculate the correlation coefficient between the timeCost (response time) and SamplingRate (sampling rate) columns:
* | select corr(timeCost,SamplingRate)
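Similarly, a sketch of the standard deviation and variance functions on the same hypothetical timeCost column:
* | select stddev(timeCost) as timeCost_stddev, variance(timeCost) as timeCost_variance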
Last updated:2024-03-20 11:47:49
The KEY parameter in the following functions indicates the log field name, such as status.
Function | Description | Example |
arbitrary(KEY) | Returns an arbitrary non-null value of the KEY column. | * | SELECT arbitrary(request_method) AS request_method |
avg(KEY) | Returns the average (arithmetic mean) of the KEY column. | * | SELECT AVG(request_time) |
bitwise_and_agg(KEY) | Returns the bitwise AND result of all input values of the KEY column. | * | SELECT bitwise_and_agg(status) |
bitwise_or_agg(KEY) | Returns the bitwise OR result of all input values of the KEY column. | * | SELECT bitwise_or_agg(request_length) |
checksum(KEY) | Returns the checksum of the KEY column. The return result is of Base64 encoding type. | * | SELECT checksum(request_method) AS request_method |
count(*) | Returns the number of input rows. | * | SELECT COUNT(*) WHERE http_status >200 |
count(1) | Returns the number of input rows. This function is equivalent to count(*). | * | SELECT COUNT(1) |
count(KEY) | Returns the number of non-null input values of the KEY column. | * | SELECT COUNT(request_time) WHERE request_time >5.0 |
count_if(boolean) | Returns the number of logs that meet specified conditions. | * | select count_if(returnCode>=400) as errorCounts |
geometric_mean(KEY) | Returns the geometric mean of KEY, which cannot contain negative numbers; otherwise, the result will be NaN. | * | SELECT geometric_mean(request_time) AS request_time |
max(KEY) | Returns the maximum value of KEY. | * | SELECT MAX(request_time) AS max_request_time |
max_by(x,y) | Returns the value of x associated with the maximum value of y over all input values. | * | SELECT MAX_BY(request_method, request_time) AS method |
max_by(x,y,n) | Returns n values of x associated with the n largest of all input values of y in descending order of y. | * | SELECT max_by(request_method, request_time, 3) AS method |
min(KEY) | Returns the minimum value of KEY. | * | SELECT MIN(request_time) AS min_request_time |
min_by(x,y) | Returns the value of x associated with the minimum value of y over all input values. | * | SELECT min_by(request_method, request_time) AS method |
min_by(x,y,n) | Returns n values of x associated with the n smallest of all input values of y in ascending order of y. | * | SELECT min_by(request_method, request_time, 3) AS method |
sum(KEY) | Returns the sum of the KEY column. | * | SELECT SUM(body_bytes_sent) AS sum_bytes |
bool_and(boolean) | Returns TRUE if all logs meet the specified condition or FALSE otherwise. | * | select bool_and(returnCode>=400) |
bool_or(boolean) | Returns TRUE if any log meets the specified condition or FALSE otherwise. | * | select bool_or(returnCode>=400) |
every(boolean) | Equivalent to bool_and(boolean). | * | select every(returnCode>=400) |
Parameter | Description |
KEY | Name of the log field. |
x | The parameter value can be of any data type. |
y | The parameter value can be of any data type. |
n | An integer greater than 0. |
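As an illustrative sketch (assuming the request_method and request_time fields used in the table above), multiple aggregate functions can be combined with GROUP BY in a single analysis statement:
* | SELECT request_method, count(*) AS pv, avg(request_time) AS avg_time, max(request_time) AS max_time GROUP BY request_method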
Last updated:2024-03-20 11:47:49
Geometry | WKT Format |
Point | POINT (0 0) |
LineString | LINESTRING (0 0, 1 1, 1 2) |
Polygon | POLYGON ((0 0, 4 0, 4 4, 0 4, 0 0), (1 1, 2 1, 2 2, 1 2, 1 1)) |
MultiPoint | MULTIPOINT (0 0, 1 2) |
MultiLineString | MULTILINESTRING ((0 0, 1 1, 1 2), (2 3, 3 2, 5 4)) |
MultiPolygon | MULTIPOLYGON (((0 0, 4 0, 4 4, 0 4, 0 0), (1 1, 2 1, 2 2, 1 2, 1 1)), ((-1 -1, -1 -2, -2 -2, -2 -1, -1 -1))) |
GeometryCollection | GEOMETRYCOLLECTION (POINT(2 3), LINESTRING (2 3, 3 4)) |
Use to_spherical_geography() to convert a plane geometry into a spherical geometry.
For example:
ST_Distance(ST_Point(-71.0882, 42.3607), ST_Point(-74.1197, 40.6976)) calculates the distance between two points on a plane, and the result is 3.4577.
ST_Distance(to_spherical_geography(ST_Point(-71.0882, 42.3607)), to_spherical_geography(ST_Point(-74.1197, 40.6976))) calculates the distance between two points on a sphere, and the result is 312822.179. When a distance or length is calculated on a spherical geometry (for example, with ST_Distance() and ST_Length()), the unit is meter. When an area is calculated (for example, with ST_Area()), the unit is square meter.
Function | Return Value Type | Description |
ST_Point(double, double) | Point | Constructs a point. |
ST_LineFromText(varchar) | LineString | Constructs a LineString based on WKT text. |
ST_Polygon(varchar) | Polygon | Constructs a polygon based on WKT text. |
ST_GeometryFromText(varchar) | Geometry | Constructs a geometry based on WKT text. |
ST_GeomFromBinary(varbinary) | Geometry | Constructs a geometry based on WKB representation. |
ST_AsText(Geometry) | varchar | Converts a geometry into WKT format. |
to_spherical_geography(Geometry) | SphericalGeography | Converts a plane geometry into a spherical geometry. |
to_geometry(SphericalGeography) | Geometry | Converts a spherical geometry into a plane geometry. |
Function | Return Value Type | Description |
ST_Contains(Geometry, Geometry) | boolean | Returns true if and only if no points of the second geometry lie in the exterior of the first geometry, and at least one point of the interior of the second geometry lies in the interior of the first geometry. Returns false if the second geometry lies exactly on the boundary of the first geometry. |
ST_Crosses(Geometry, Geometry) | boolean | Returns true if the given geometries have some, but not all, interior points in common. |
ST_Disjoint(Geometry, Geometry) | boolean | Returns true if the given geometries do not spatially intersect. |
ST_Equals(Geometry, Geometry) | boolean | Returns true if the given geometries represent the same geometry. |
ST_Intersects(Geometry, Geometry) | boolean | Returns true if the given geometries spatially intersect in two dimensions. |
ST_Overlaps(Geometry, Geometry) | boolean | Returns true if the given geometries are of the same dimension, but are not completely contained by each other. |
ST_Relate(Geometry, Geometry) | boolean | Returns true if the first geometry is spatially related to the second geometry. |
ST_Touches(Geometry, Geometry) | boolean | Returns true if a geometry spatially touches another geometry, but their interiors do not intersect. |
ST_Within(Geometry, Geometry) | boolean | Returns true if the first geometry is completely inside the second geometry. Returns false if their boundaries intersect. |
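For example (a minimal sketch using only the constructors and predicates listed above), check whether a point lies inside a square polygon; the query is expected to return true:
* | SELECT ST_Contains(ST_Polygon('POLYGON ((0 0, 4 0, 4 4, 0 4, 0 0))'), ST_Point(1, 1))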
Function | Return Value Type | Description |
geometry_nearest_points(Geometry, Geometry) | row(Point, Point) | Returns the two closest points between two geometries. |
geometry_union(array(Geometry)) | Geometry | Combines multiple geometries into one. |
ST_Boundary(Geometry) | Geometry | Returns the boundary (closure) of a geometry. |
ST_Buffer(Geometry, distance) | Geometry | Returns a geometric object that represents the union of all points whose distance from a geometry is less than or equal to a specified value. |
ST_Difference(Geometry, Geometry) | Geometry | Returns the geometry value that represents the point set difference of the two given geometries. |
ST_Envelope(Geometry) | Geometry | Returns the bounding rectangular polygon of the geometry. |
ST_ExteriorRing(Geometry) | Geometry | Returns the exterior ring of the input polygon. |
ST_Intersection(Geometry, Geometry) | Geometry | Returns the geometry value that represents the point set intersection of two given geometries. |
ST_SymDifference(Geometry, Geometry) | Geometry | Returns the geometry value that represents the point set symmetric difference of two geometries. |
Function | Return Value Type | Description |
ST_Area(Geometry) | double | Returns the area of a polygon in a plane geometry. |
ST_Area(SphericalGeography) | double | Returns the area of a polygon in a spherical geometry. |
ST_Centroid(Geometry) | Geometry | Returns the point value that is the mathematical centroid of a geometry. |
ST_CoordDim(Geometry) | bigint | Returns the coordinate dimension of the geometry. |
ST_Dimension(Geometry) | bigint | Returns the inherent dimension of this geometry, which must be less than or equal to the coordinate dimension. |
ST_Distance(Geometry, Geometry) | double | Returns the minimum distance between two geometries. |
ST_Distance(SphericalGeography, SphericalGeography) | double | Returns the smallest distance between two spherical geographies. |
ST_IsClosed(Geometry) | boolean | Returns true if the start and end points of the given geometry are the same. |
ST_IsEmpty(Geometry) | boolean | Returns true if the geometry is an empty GeometryCollection, polygon, point etc. |
ST_IsRing(Geometry) | boolean | Returns true if and only if the geometry is a closed and simple line. |
ST_Length(Geometry) | double | Returns the length of a LineString or MultiLineString on a plane geometry. |
ST_Length(SphericalGeography) | double | Returns the length of a LineString or MultiLineString on a spherical geometry. |
ST_XMax(Geometry) | double | Returns the X maxima of a bounding box of a geometry. |
ST_YMax(Geometry) | double | Returns the Y maxima of a bounding box of a geometry. |
ST_XMin(Geometry) | double | Returns the X minima of a bounding box of a geometry. |
ST_YMin(Geometry) | double | Returns the Y minima of a bounding box of a geometry. |
ST_StartPoint(Geometry) | point | Returns the first point of a LineString geometry. |
ST_EndPoint(Geometry) | point | Returns the last point of a LineString geometry. |
ST_X(Point) | double | Returns the X coordinate of the point. |
ST_Y(Point) | double | Returns the Y coordinate of the point. |
ST_NumPoints(Geometry) | bigint | Returns the number of points in a geometry. |
ST_NumInteriorRing(Geometry) | bigint | Returns the number of the interior rings of a polygon. |
Last updated:2024-03-20 11:47:49
Statement | Description |
Concatenation function || | The result of a || b is ab. |
length(binary) → bigint | Returns the length of a binary string. |
concat(binary1, …, binaryN) → varbinary | Concatenates binary strings. This function provides the same functionality as ||. |
to_base64(binary) → varchar | Converts a binary string into a base64 string. |
from_base64(string) → varbinary | Converts a base64 string into a binary string. |
to_base64url(binary) → varchar | Converts a binary string into a base64 string with a URL safe alphabet. |
from_base64url(string) → varbinary | Converts a base64 string with a URL safe alphabet into a binary string. |
to_hex(binary) → varchar | Converts a binary string into a hexadecimal string. |
from_hex(string) → varbinary | Converts a hexadecimal string into a binary string. |
to_big_endian_64(bigint) → varbinary | Converts a bigint number into a big endian binary string. |
from_big_endian_64(binary) → bigint | Converts a big endian binary string into a number. |
md5(binary) → varbinary | Computes the MD5 hash of a binary string. |
sha1(binary) → varbinary | Computes the SHA1 hash of a binary string. |
sha256(binary) → varbinary | Computes the SHA256 hash of a binary string. |
sha512(binary) → varbinary | Computes the SHA512 hash of a binary string. |
xxhash64(binary) → varbinary | Computes the xxHash64 hash of a binary string. |
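For example (a sketch combining functions from the table above), decode the hexadecimal string 48656c6c6f (the ASCII bytes of "Hello") into a binary string, then Base64-encode it and compute its MD5 hash; the Base64 result is SGVsbG8=:
* | SELECT to_base64(from_hex('48656c6c6f')) AS b64, to_hex(md5(from_hex('48656c6c6f'))) AS md5_hex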
Last updated:2024-01-22 10:52:48
Function | Syntax | Description |
approx_distinct | approx_distinct(x) | Returns the approximate number of distinct input values (column x). |
approx_percentile | approx_percentile(x,percentage) | Sorts the values in the x column in ascending order and returns the value approximately at the given `percentage` position. |
| approx_percentile(x,array[percentage01, percentage02...]) | Sorts the values in the x column in ascending order and returns the values approximately at the given `percentage` positions (percentage01, percentage02...). |
The approx_distinct function is used to get the approximate number of distinct input values of a field. The standard deviation of the result is 2.3%.
approx_distinct(x)
Parameter | Description |
x | The parameter value can be of any data type. |
For example, use the count function to calculate the PV value, and use the approx_distinct function to get the approximate number of distinct values of the client IP field (ip) as the UV value:
* | SELECT count(*) AS PV, approx_distinct(ip) AS UV
The approx_percentile function is used to sort the values of the target field in ascending order and return the value approximately at the given percentage position. It uses the T-Digest algorithm for estimation, which has a low deviation and meets most statistical analysis requirements. If needed, you can use * | select count_if(x<(select approx_percentile(x,percentage))),count(*) to accurately count the number of field values below the percentage position and the total number of field values, and then verify the statistical deviation.
Return the value approximately at the given percentage position:
approx_percentile(x, percentage)
Return the values approximately at the given percentage positions (percentage01, percentage02...):
approx_percentile(x, array[percentage01,percentage02...])
Parameter | Description |
x | Value type: double |
percentage | Value range: [0,1] |
* | select approx_percentile(resTotalTime,0.5)
* | select approx_percentile(resTotalTime, array[0.2,0.4,0.6])
Last updated:2024-01-22 10:52:48
Function | Syntax | Description |
cast | cast(x as type) | Converts x to the specified data type. During cast execution, if a value fails to be converted, the system terminates the entire query and analysis operation. |
try_cast | try_cast(x as type) | Converts x to the specified data type. During try_cast execution, if a value fails to be converted, the system returns NULL for that value and continues processing. |
typeof | typeof(x) | Returns the data type of x. |
You can use the try_cast function to avoid query and analysis failures caused by dirty data.
The cast function is used to convert x to the specified data type. During cast execution, if a value fails to be converted, the system terminates the entire query and analysis operation.
cast(x as type)
Parameter | Description |
x | The parameter value can be of any type. |
type | SQL data type. Valid values: bigint, varchar, double, boolean, timestamp, decimal, array, or map. |
If type is timestamp, x must be a timestamp in milliseconds (such as 1597807109000) or a time string in the ISO 8601 format (such as 2019-12-25T16:17:01+08:00). For example, convert the value 0.01 to the bigint type specified by the type parameter:
* | select cast(0.01 as bigint)
Convert the log time __TIMESTAMP__ to the timestamp type:
* | select cast(__TIMESTAMP__ as timestamp)
The try_cast function is used to convert x to the specified data type. During try_cast execution, if a value fails to be converted, the system returns NULL for that value and continues processing.
try_cast(x as type)
Parameter | Description |
x | The parameter value can be of any type. |
type | SQL data type. Valid values: bigint, varchar, double, boolean, timestamp, decimal, array, or map. |
For example, convert the value of the remote_user field to the varchar type specified by the type parameter:
* | select try_cast(remote_user as varchar)
Index Data Type | SQL Data Type |
long | bigint |
text | varchar |
double | double |
json | varchar |
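The typeof function takes a single expression. As a minimal sketch (assuming a remote_user field configured with the text index type), the following returns the SQL data type the field maps to, which should be varchar according to the mapping table above:
* | select typeof(remote_user)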
Last updated:2024-01-22 10:52:48
Operator | Description | Example |
AND | The result is TRUE only if both the left and right operands are TRUE. | a AND b |
OR | The result is TRUE if either of the left and right operands is TRUE. | a OR b |
NOT | Negates the operand: the result is TRUE if the operand is FALSE, and FALSE if the operand is TRUE. | NOT a |
The following truth tables show the results when a and b are TRUE, FALSE, or NULL:
a | b | a AND b | a OR b |
TRUE | TRUE | TRUE | TRUE |
TRUE | FALSE | FALSE | TRUE |
TRUE | NULL | NULL | TRUE |
FALSE | TRUE | FALSE | TRUE |
FALSE | FALSE | FALSE | FALSE |
FALSE | NULL | FALSE | NULL |
NULL | TRUE | NULL | TRUE |
NULL | FALSE | FALSE | NULL |
NULL | NULL | NULL | NULL |
a | NOT a |
TRUE | FALSE |
FALSE | TRUE |
NULL | NULL |
Last updated:2024-01-22 10:52:48
Assume variable a holds 1 and variable b holds 2, then:
Operator | Description | Example |
+ (Addition) | Adds values on either side of the operator. | a + b |
- (Subtraction) | Subtracts the right hand operand from the left hand operand. | a - b |
* (Multiplication) | Multiplies values on either side of the operator. | a * b |
/ (Division) | Divides the left hand operand by the right hand operand. | b / a |
% (Modulus) | Divides the left hand operand by the right hand operand and returns the remainder. | b % a |
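As an illustrative sketch (assuming the numeric body_bytes_sent and request_time fields used elsewhere in this document), arithmetic operators can be applied directly to field values and aggregation results in an analysis statement:
* | SELECT sum(body_bytes_sent) / 1024.0 AS total_kb, avg(request_time) * 1000 AS avg_time_ms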
Assume variable a holds 1 and variable b holds 2, then:
Operator | Description | Example |
= | Checks if the values of two operands are equal or not. If yes, the condition is true. | a = b |
!= | Checks if the values of two operands are equal or not. If no, the condition is true. | a != b |
<> | Checks if the values of two operands are equal or not. If no, the condition is true. | a <> b |
> | Checks if the value of the left operand is greater than the value of the right operand. If yes, the condition is true. | a > b |
< | Checks if the value of the left operand is less than the value of the right operand. If yes, the condition is true. | a < b |
>= | Checks if the value of the left operand is greater than or equal to the value of the right operand. If yes, the condition is true. | a >= b |
<= | Checks if the value of the left operand is less than or equal to the value of the right operand. If yes, the condition is true. | a <= b |
IN | The IN operator is used to compare a value with a specified list of values. | status IN (200,206,404) |
NOT IN | The NOT IN operator is used to compare a value with values that are not in a specified list. It is the opposite of the IN operator. | status NOT IN (200,206,404) |
BETWEEN AND | The BETWEEN operator tests if a value is within a specified range (BETWEEN min AND max). | status between 200 AND 400 |
LIKE | The LIKE operator is used to compare a value with a similar value using the wildcard operator. The percent sign (%) represents zero, one, or multiple characters. The underscore (_) represents a single digit or character. | url LIKE '%.mp4' |
IS NULL | The NULL operator compares a value with NULL. If the value is null, the condition is true. | status IS NULL |
IS NOT NULL | The NULL operator compares a value with NULL. If the value is not null, the condition is true. | status IS NOT NULL |
DISTINCT | Syntax: x IS DISTINCT FROM y or x IS NOT DISTINCT FROM y. The DISTINCT operator checks whether x equals y. Unlike <>, it can compare null values. For more information, see Differences between <> and DISTINCT. | NULL IS NOT DISTINCT FROM NULL |
LEAST | Syntax: LEAST(x, y...).Returns the minimum value among x,y... | LEAST(1,2,3) |
GREATEST | Syntax: GREATEST(x, y...). Returns the maximum value among x,y... | GREATEST(1,2,3) |
ALL | Syntax: x expression operator ALL ( subquery ) Returns true if x meets all conditions. Supported operators are <, >, <=, >=, =, <>, !=. | Example 1: 21 < ALL (VALUES 19, 20, 21) Example 2: * | SELECT 200 = ALL(SELECT status) |
ANY / SOME | Syntax: x expression operator ANY ( subquery ) or x expression operator SOME ( subquery ). Returns true if x meets any condition. Supported operators are <, >, <=, >=, =, <>, !=. | Example 1: 'hello' = ANY (VALUES 'hello', 'world') Example 2: * | SELECT 200 = ANY(SELECT status) |
x | y | x = y | x <> y | x IS DISTINCT FROM y | x IS NOT DISTINCT FROM y |
1 | 1 | true | false | false | true |
1 | 2 | false | true | true | false |
1 | null | null | null | true | false |
null | null | null | null | false | true |
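The following minimal queries illustrate the table above: comparing NULL with = yields NULL, whereas IS NOT DISTINCT FROM treats two NULL values as equal and returns true:
* | SELECT NULL = NULL
* | SELECT NULL IS NOT DISTINCT FROM NULL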
Last updated:2024-01-22 10:52:48
Function | Syntax | Description |
bit_count(x, bits) | Returns the number of ones in x in binary representation. | |
bitwise_and(x, y) | Returns the result of the bitwise AND operation on x and y in binary representation. | |
bitwise_not(x) | Inverts all bits of x in binary representation. | |
bitwise_or(x, y) | Returns the result of the bitwise OR operation on x and y in binary representation. | |
bitwise_xor(x, y) | Returns the result of the bitwise XOR operation on x and y in binary representation. |
bit_count function is used to return the number of ones in x.bit_count(x, bits)
Parameter | Description |
x | The parameter value is of the bigint type. |
bits | Number of bits, for example, 64 bits. |
* | SELECT bit_count(24, 64)
2
bitwise_and function is used to return the result of the bitwise AND operation on x and y in binary representation.bitwise_and(x, y)
Parameter | Description |
x | The parameter value is of the bigint type. |
y | The parameter value is of the bigint type. |
* | SELECT bitwise_and(3, 5)
1
bitwise_not function is used to invert all bits of x in binary representation.bitwise_not(x)
Parameter | Description |
x | The parameter value is of the bigint type. |
* | SELECT bitwise_not(4)
-5
bitwise_or function is used to return the result of the bitwise OR operation on x and y in binary representation.bitwise_or(x, y)
Parameter | Description |
x | The parameter value is of the bigint type. |
y | The parameter value is of the bigint type. |
* | SELECT bitwise_or(3, 5)
7
bitwise_xor function is used to return the result of the bitwise XOR operation on x and y in binary representation.bitwise_xor(x, y)
Parameter | Description |
x | The parameter value is of the bigint type. |
y | The parameter value is of the bigint type. |
* | SELECT bitwise_xor(3, 5)
6
Last updated:2024-01-22 10:52:48
Function | Syntax | Description |
regexp_extract_all(x, regular expression) | Extracts the substrings that match a specified regular expression from a specified string and returns a collection of all matched substrings. | |
| regexp_extract_all(x, regular expression, n) | Extracts the substrings that match a specified regular expression from a specified string and returns a collection of substrings that match the target capture group. |
regexp_extract(x, regular expression) | Extracts and returns the first substring that matches a specified regular expression from a specified string. | |
| regexp_extract(x, regular expression, n) | Extracts the substrings that match a specified regular expression from a specified string and returns the first substring that matches the target capture group. |
regexp_like(x, regular expression) | Checks whether a specified string matches a specified regular expression. | |
regexp_replace(x, regular expression) | Deletes the substrings that match a specified regular expression from a specified string and returns the substrings that are not deleted. | |
| regexp_replace(x, regular expression, replace string) | Replaces the substrings that match a specified regular expression in a specified string and returns the new string after the replacement. |
regexp_split(x, regular expression) | Splits a specified string into multiple substrings by using a specified regular expression and returns a collection of the substrings. | |
regexp_extract_all function is used to extract the substrings that match a specified regular expression from a specified string.regexp_extract_all(x, regular expression)
regexp_extract_all(x, regular expression, n)
Parameter | Description |
x | The parameter value is of the varchar type. |
regular expression | The regular expression that contains capture groups. For example, (\d)(\d)(\d) indicates three capture groups. |
n | The nth capture group. n is an integer that starts from 1. |
For example, extract all numbers from the http_protocol field:
* | SELECT regexp_extract_all(http_protocol, '\d+')
[1,1]
regexp_extract function is used to extract the first substring that matches a specified regular expression from a specified string.regexp_extract(x, regular expression)
regexp_extract(x, regular expression, n)
Parameter | Description |
x | The parameter value is of the varchar type. |
regular expression | The regular expression that contains capture groups. For example, (\d)(\d)(\d) indicates three capture groups. |
n | The nth capture group. n is an integer that starts from 1. |
For example, extract the first number from the http_protocol field:
* | SELECT regexp_extract(http_protocol, '\d+')
1
regexp_like function is used to check whether a specified string matches a specified regular expression.regexp_like (x, regular expression)
Parameter | Description |
x | The parameter value is of the varchar type. |
regular expression | Regular expression. |
For example, check whether the value of the server_protocol field contains digits:
* | select regexp_like(server_protocol, '\d+')
TRUE
regexp_replace function is used to delete or replace the substrings that match a specified regular expression in a specified string.regexp_replace (x, regular expression)
regexp_replace (x, regular expression, replace string)
Parameter | Description |
x | The parameter value is of the varchar type. |
regular expression | Regular expression. |
replace string | Substring that is used to replace the matched substring. |
For example, delete the version number from the server_protocol field and calculate the number of requests for each communication protocol:
* | select regexp_replace(server_protocol, '.\d+') AS server_protocol, count(*) AS count GROUP BY server_protocol
server_protocol | count |
HTTP | 357 |
regexp_split function is used to split a specified string into multiple substrings and return a collection of the substrings.regexp_split (x, regular expression)
Parameter | Description |
x | The parameter value is of the varchar type. |
regular expression | Regular expression. |
For example, split the value of the server_protocol field by forward slashes (/):
* | select regexp_split(server_protocol, '/')
["HTTP","1.1"]
Last updated:2024-01-22 10:52:48
parameter -> expression
Parameter | Description |
parameter | Identifier used to pass the parameter. |
expression | Expression. Most MySQL expressions can be used in Lambda expressions, such as:
|
* | SELECT filter(array[5, null, 7, null], x -> x is not null)
[5,7]
* | SELECT reduce(array[5, 20, 50], 0, (s, x) -> s + x, s -> s)
75
* | SELECT map_filter(map(array['class01', 'class02', 'class03'], array[11, 10, 9]), (k,v) -> v > 10)
{"class01":11}
* | SELECT transform(array[5, NULL, 6], x -> coalesce(x, 0) + 1)
[6,1,7]
* | SELECT filter(array[], x -> true)
* | SELECT map_filter(map(array[],array[]), (k, v) -> true)
* | SELECT reduce(array[5, 6, 10, 20], -- calculates arithmetic average: 10.25
cast(row(0.0, 0) AS row(sum double, count integer)),
(s, x) -> cast(row(x + s.sum, s.count + 1) AS row(sum double, count integer)),
s -> if(s.count = 0, null, s.sum / s.count))
* | SELECT reduce(array[2147483647, 1], cast(0 AS bigint), (s, x) -> s + x, s -> s)
* | SELECT reduce(array[5, 20, null, 50], 0, (s, x) -> s + x, s -> s)
* | SELECT transform(array[array[1, null, 2], array[3, null]], a -> filter(a, x -> x is not null))
Last updated:2024-01-22 10:52:48
Expression | Syntax | Description |
CASE WHEN condition1 THEN result1 [WHEN condition2 THEN result2] [ELSE result3] END | Classifies data according to specified conditions. | |
IF(condition, result1) | If `condition` is `true`, returns `result1`. Otherwise, returns `null`. | |
| IF(condition, result1, result2) | If `condition` is `true`, returns `result1`. Otherwise, returns `result2`. |
NULLIF(expression1, expression2) | Determines whether the values of two expressions are equal. If the values are equal, returns `null`. Otherwise, returns the value of the first expression. | |
TRY(expression) | Captures exception information to enable the system to continue query and analysis operations. | |
COALESCE(expression1, expression2...) | Gets the first non-null value in multiple expressions. | |
CASE WHEN expression is used to classify data.CASE WHEN condition1 THEN result1[WHEN condition2 THEN result2][ELSE result3]END
Parameter | Description |
condition | Conditional expression |
result | Return result |
http_user_agent field, classify the information into the Chrome, Safari, and unknown types, and calculate the PVs of the three types.* | SELECT CASE WHEN http_user_agent like '%Chrome%' then 'Chrome' WHEN http_user_agent like '%Safari%' then 'Safari' ELSE 'unknown' END AS http_user_agent, count(*) AS pv GROUP BY http_user_agent
* | SELECT CASE WHEN request_time < 0.001 then 't0.001' WHEN request_time < 0.01 then 't0.01' WHEN request_time < 0.1 then 't0.1' WHEN request_time < 1 then 't1' ELSE 'overtime' END AS request_time, count(*) AS pv GROUP BY request_time
IF expression is used to classify data. It is similar to the CASE WHEN expression.condition is true, return result1. Otherwise, return null.IF(condition, result1)
condition is true, return result1. Otherwise, return result2.IF(condition, result1, result2)
Parameter | Description |
condition | Conditional expression |
result | Return result |
* | SELECT sum(IF(status = 200, 1, 0)) * 1.0 / count(*) AS status_200_percentage
NULLIF expression is used to determine whether the values of two expressions are equal. If the values are equal, return null. Otherwise, return the value of the first expression.NULLIF(expression1, expression2)
Parameter | Description |
expression | Any valid scalar expression |
For example, determine whether the values of the server_addr and http_host fields are the same. If the values are different, return the value of server_addr:
* | SELECT NULLIF(server_addr,http_host)
TRY expression is used to capture exception information to enable the system to continue query and analysis operations.TRY(expression)
Parameter | Description |
expression | Expression of any type |
regexp_extract function execution, the TRY expression captures the exception information, continues the query and analysis operation, and returns the query and analysis result.* | SELECT TRY(regexp_extract(uri, './(index.)', 1)) AS file, count(*) AS count GROUP BY file
COALESCE expression is used to get the first non-null value in multiple expressions.COALESCE(expression1, expression2...)
Parameter | Description |
expression | Any valid scalar expression |
* | select COALESCE(null, 'test')
Last updated:2024-01-22 10:57:48
Function | Syntax | Description |
[x] | Returns an element from an array. Equivalent to the `element_at` function. | |
array_agg(x) | Returns all values in `x` as an array. | |
array_distinct(x) | Deduplicates an array and returns unique values from the array. | |
array_except(x, y) | Returns the difference between the `x` and `y` arrays. | |
array_intersect(x, y) | Returns the intersection between the `x` and `y` arrays. | |
array_join(x, delimiter) | Concatenates the elements in an array using the specified delimiter. If the array contains a null element, the null element is ignored.Note: For the `array_join` function, the maximum return result supported is 1 KB, and data exceeding 1 KB will be truncated. | |
| array_join(x, delimiter, null_replacement) | Concatenates the elements in an array using `delimiter` and uses `null_replacement` to replace null values.Note: For the `array_join` function, the maximum return result supported is 1 KB, and data exceeding 1 KB will be truncated. |
array_max(x) | Returns the maximum value of an array. | |
array_min(x) | Returns the minimum value of an array. | |
array_position(x, element) | Returns the subscript (starting from 1) of a specified element. If the specified element does not exist, return `0`. | |
array_remove(x, element) | Removes a specified element from an array. | |
array_sort(x) | Sorts elements in an array in ascending order. If there are null elements, the null elements will be placed at the end of the returned array. | |
array_union(x, y) | Returns the union of two arrays. | |
cardinality(x) | Calculates the number of elements in an array. | |
concat(x, y...) | Concatenates multiple arrays into one. | |
contains(x, element) | Determines whether an array contains a specified element and returns `true` if the array contains the element. | |
element_at(x, y) | Returns the yth element of an array. | |
filter(x, lambda_expression) | Filters elements in an array and returns only the elements that comply with the Lambda expression. | |
flatten(x) | Converts a two-dimensional array to a one-dimensional array. | |
reduce(x, lambda_expression) | Adds the elements in the array as defined by the Lambda expression and returns the result. | |
reverse(x) | Reverses the elements in an array. | |
sequence(x, y) | Returns an array of consecutive increasing values from the start value x to the stop value y, with a default increment of 1. | |
| sequence(x, y, step) | Returns an array of consecutive increasing values from the start value x to the stop value y, with an increment of step. |
shuffle(x) | Randomizes the elements in an array. | |
slice(x, start, length) | Returns a subset of an array. | |
transform(x, lambda_expression) | Applies a Lambda expression to each element of an array. | |
zip(x, y) | Combines multiple arrays into a two-dimensional array (elements with the same subscript in each original array form a new array). | |
zip_with(x, y, lambda_expression) | Merges two arrays into one as defined by a Lambda expression. | |
element_at function.[x]
Parameter | Description |
x | Array subscript, starting from 1. The parameter value is of the bigint type. |
number field value.array:[12,23,26,48,26]
* | SELECT cast(json_parse(array) as array(bigint)) [2]
23
array_agg function is used to return all values in x as an array.array_agg(x)
Parameter | Description |
x | The parameter value can be of any data type. |
status field as an array.* | SELECT array_agg(status) AS array
[200,200,200,404,200,200]
array_distinct function is used to delete duplicate elements from an array.array_distinct(x)
Parameter | Description |
x | The parameter value is of the array type. |
array field.array:[12,23,26,48,26]
* | SELECT array_distinct(cast(json_parse(array) as array(bigint)))
[12,23,26,48]
array_except function is used to calculate the difference between two arrays.array_except(x, y)
Parameter | Description |
x | The parameter value is of the array type. |
y | The parameter value is of the array type. |
* | SELECT array_except(array[1,2,3,4,5],array[1,3,5,7])
[2,4]
array_intersect function is used to calculate the intersection between two arrays.array_intersect(x, y)
Parameter | Description |
x | The parameter value is of the array type. |
y | The parameter value is of the array type. |
* | SELECT array_intersect(array[1,2,3,4,5],array[1,3,5,7])
[1,3,5]
array_join function is used to concatenate the elements in an array into a string using the specified delimiter.array_join(x, delimiter)
null_replacement.array_join(x, delimiter, null_replacement)
Parameter | Description |
x | The parameter value is of the array type. |
delimiter | Connector, which can be a string. |
null_replacement | String used to replace a null element. |
region.* | SELECT array_join(array[null,'China','sh'],'/','region')
region/China/sh
array_max function is used to get the maximum value of an array.array_max(x)
Parameter | Description |
x | The parameter value is of the array type. |
array:[12,23,26,48,26]
* | SELECT array_max(try_cast(json_parse(array) as array(bigint))) AS max_number
48
array_min function is used to get the minimum value of an array.array_min(x)
Parameter | Description |
x | The parameter value is of the array type. |
array:[12,23,26,48,26]
* | SELECT array_min(try_cast(json_parse(array) as array(bigint))) AS min_number
12
array_position function is used to get the subscript (starting from 1) of a specified element. If the specified element does not exist, return 0.array_position(x, element)
Parameter | Description |
x | The parameter value is of the array type. |
element | Element in an array. |
* | SELECT array_position(array[23,46,35],46)
2
array_remove function is used to delete a specified element from an array.array_remove(x, element)
Parameter | Description |
x | The parameter value is of the array type. |
element | Element in an array. |
* | SELECT array_remove(array[23,46,35],23)
[46,35]
array_sort function is used to sort elements in an array in ascending order.array_sort(x)
Parameter | Description |
x | The parameter value is of the array type. |
* | SELECT array_sort(array['b','d',null,'c','a'])
["a","b","c","d",null]
array_union function is used to calculate the union of two arrays.array_union(x, y)
Parameter | Description |
x | The parameter value is of the array type. |
y | The parameter value is of the array type. |
* | SELECT array_union(array[1,2,3,4,5],array[1,3,5,7])
[1,2,3,4,5,7]
cardinality function is used to calculate the number of elements in an array.cardinality(x)
Parameter | Description |
x | The parameter value is of the array type. |
number field value.array:[12,23,26,48,26]
* | SELECT cardinality(cast(json_parse(array) as array(bigint)))
5
concat function is used to concatenate multiple arrays into one.concat(x, y...)
Parameter | Description |
x | The parameter value is of the array type. |
y | The parameter value is of the array type. |
* | SELECT concat(array['red','blue'],array['yellow','green'])
["red","blue","yellow","green"]
contains function is used to determine whether an array contains a specified element and return true if the array contains the element.contains(x, element)
Parameter | Description |
x | The parameter value is of the array type. |
element | Element in an array. |
array field value contains 23.array:[12,23,26,48,26]
* | SELECT contains(cast(json_parse(array) as array(varchar)),'23')
TRUE
element_at function is used to return the yth element in an array.element_at(x, y)
Parameter | Description |
x | The parameter value is of the array type. |
y | Array subscript, starting from 1. The parameter value is of the bigint type. |
number field value.array:[12,23,26,48,26]
* | SELECT element_at(cast(json_parse(number) AS array(varchar)), 2)
23
filter function is used to filter elements in an array and return only the elements that comply with a specified Lambda expressionfilter(x, lambda_expression)
Parameter | Description |
x | The parameter value is of the array type. |
lambda_expression |
x -> x > 0 is the Lambda expression.* | SELECT filter(array[5,-6,null,7],x -> x > 0)
[5,7]
flatten function is used to convert a two-dimensional array to a one-dimensional array.flatten(x)
Parameter | Description |
x | The parameter value is of the array type. |
* | SELECT flatten(array[array[1,2,3,4],array[4,3,2,1]])
[1,2,3,4,4,3,2,1]
reduce function is used to add the elements in an array as defined by the Lambda expression and return the result.reduce(x, lambda_expression)
Parameter | Description |
x | The parameter value is of the array type. |
lambda_expression |
* | SELECT reduce(array[5,20,50],0,(s, x) -> s + x, s -> s)
75
reverse function is used to reverse the elements in an array.reverse(x)
Parameter | Description |
x | The parameter value is of the array type. |
* | SELECT reverse(array[1,2,3,4,5])
[5,4,3,2,1]
The sequence function is used to return an array of consecutive increasing values from the start value to the stop value. The default increment is 1.
sequence(x, y)
sequence(x, y, step)
Parameter | Description |
x | The parameter value is of the bigint or timestamp type (UNIX timestamp or date and time expression). |
y | The parameter value is of the bigint or timestamp type (UNIX timestamp or date and time expression). |
step | Value interval.If the parameter value is a date and time expression, the format of step is as follows:interval 'n' year to month: the interval is n years.interval 'n' day to second: the interval is n days. |
* | SELECT sequence(0,10,2)
[0,2,4,6,8,10]
* | SELECT sequence(from_unixtime(1508737026),from_unixtime(1628734085),interval '1' year to month )
["2017-10-23 05:37:06.0","2018-10-23 05:37:06.0","2019-10-23 05:37:06.0","2020-10-23 05:37:06.0"]
* | SELECT sequence(1628733298,1628734085,60)
[1628733298,1628733358,1628733418,1628733478,1628733538,1628733598,1628733658,1628733718,1628733778,1628733838,1628733898,1628733958,1628734018,1628734078]
shuffle function is used to randomize the elements in an array.shuffle(x)
Parameter | Description |
x | The parameter value is of the array type. |
* | SELECT shuffle(array[1,2,3,4,5])
[5,2,4,1,3]
slice function is used to return a subset of an array.slice(x, start, length)
Parameter | Description |
x | The parameter value is of the array type. |
start | Index start position. If start is positive, counting starts from the beginning; if start is negative, counting starts from the end. |
length | Number of elements in a subset. |
* | SELECT slice(array[1,2,4,5,6,7,7],3,2)
[4,5]
transform function is used to apply a Lambda expression to each element of an array.transform(x, lambda_expression)
Parameter | Description |
x | The parameter value is of the array type. |
lambda_expression |
* | SELECT transform(array[5,6],x -> x + 1)
[6,7]
zip function is used to combine multiple arrays into a two-dimensional array, and elements with the same subscript in each original array form a new array.zip(x, y)
Parameter | Description |
x | The parameter value is of the array type. |
y | The parameter value is of the array type. |
* | SELECT zip(array[1,2], array[3,4])
["{1, 3}","{2, 4}"]
zip_with function is used to merge two arrays into one as defined by a Lambda expression.zip_with(x, y, lambda_expression)
Parameter | Description |
x | The parameter value is of the array type. |
y | The parameter value is of the array type. |
lambda_expression |
(x, y) -> x + y to add the elements in arrays [1,2] and [3,4], respectively, and return the sum results as an array.* | SELECT zip_with(array[1,2], array[3,4],(x,y) -> x + y)
[4,6]
Last updated:2024-01-22 10:52:48
Function | Syntax | Description |
compare(x,n) | Compares the calculation result of the current time period with the calculation result of a time period n seconds before. | |
| compare(x,n1,n2,n3...) | Compares the calculation result of the current time period with the calculation results of time periods n1, n2, and n3 seconds before. |
compare function is used to compare the calculation result of the current time period with the calculation result of a time period n seconds before.compare (x, n)
compare (x, n1, n2, n3...)
Parameter | Description |
x | The parameter value is of the double or long type. |
n | Time window. Unit: seconds. Example: 3600 (1 hour), 86400 (1 day), 604800 (1 week), or 31622400 (1 year). |
86400 indicates the current time minus 86400 seconds (1 day).* | SELECT compare(PV, 86400) FROM (SELECT count(*) AS PV)
[1860,1656,1.1231884057971016]
* | SELECT compare[1] AS today, compare[2] AS yesterday, compare[3] AS ratio FROM (SELECT compare(PV, 86400) AS compare FROM (SELECT COUNT(*) AS PV))
In this example, 86400 indicates the current time minus 86400 seconds (1 day), and current_date indicates the date of the current day. The time format is %H:%i:%s, which contains only the hour, minute, and second but not the date; if the time range spans days, data statistics errors will occur.
* | select concat(cast(current_date as varchar),' ',time) as time, compare[1] as today, compare[2] as yesterday from (select time, compare(pv, 86400) as compare from (select time_series(__TIMESTAMP__, '5m', '%H:%i:%s', '0') as time, count(*) as pv group by time limit 1000) limit 1000) order by time limit 1000

Last updated:2024-01-22 10:52:48
Function | Syntax | Description |
json_array_contains(x, value) | Determines whether a JSON array contains a given value. | |
json_array_get(x, index) | Returns the element with the specified index in a given JSON array. | |
json_array_length(x) | Returns the number of elements in a given JSON array. If `x` is not a JSON array, `null` will be returned. | |
json_extract(x, json_path) | Extracts a set of JSON values (array or object) from a JSON object or array. | |
json_extract_scalar(x, json_path) | Extracts a set of scalar values (strings, integers, or Boolean values) from a JSON object or array. Similar to the `json_extract` function. | |
json_format(x) | Converts a JSON value into a string value. | |
json_parse(x) | Converts a string value into a JSON value. | |
json_size(x, json_path) | Calculates the number of elements in a JSON object or array. |
json_array_contains function is used to determine whether a JSON array contains a specified value.json_array_contains(x, value)
Parameter | Description |
x | The parameter value is a JSON array. |
value | Value. |
* | SELECT json_array_contains('[1, 2, 3]', 2)
TRUE
json_array_get function is used to get the element with a specified index in a JSON array.json_array_get(x, index)
Parameter | Description |
x | The parameter value is a JSON array. |
index | JSON subscript (index), starting from 0. |
* | SELECT json_array_get('["a", [3, 9], "c"]', 1)
[3,9]
json_array_length function is used to calculate the number of elements in a JSON array. If x is not a JSON array, null will be returned.json_array_length(x)
Parameter | Description |
x | The parameter value is a JSON array. |
apple.message:[{"traceName":"StoreMonitor"},{"topicName":"persistent://apache/pulsar/test-partition-17"},{"producerName":"pulsar-mini-338-36"},{"localAddr":"pulsar://pulsar-mini-broker-5.pulsar-mini-broker.pulsar.svc.cluster.local:6650"},{"sequenceId":826},{"storeTime":1635905306062},{"messageId":"19422-24519"},{"status":"SUCCESS"}]
* | SELECT json_array_length(apple.message)
8
json_extract function is used to extract a set of JSON values (array or object) from a JSON object or array.json_extract(x, json_path)
Parameter | Description |
x | The parameter value is a JSON object or array. |
json_path | Note: JSON syntax requiring array element traversal is not supported, such as the following: $.store.book[*].author, $..book[(@.length-1)], $..book[?(@.price<10)]. |
apple.instant:{"epochSecond":1635905306,"nanoOfSecond":63001000}
* | SELECT json_extract(apple.instant, '$.epochSecond')
1635905306
json_extract_scalar function is used to extract a set of scalar values (strings, integers, or Boolean values) from a JSON object or array.json_extract_scalar(x, json_path)
Parameter | Description |
x | The parameter value is a JSON array. |
json_path | Note: JSON syntax requiring array element traversal is not supported, such as the following: $.store.book[*].author, $..book[(@.length-1)], $..book[?(@.price<10)]. |
apple.instant:{"epochSecond":1635905306,"nanoOfSecond":63001000}
* | SELECT sum(cast(json_extract_scalar(apple.instant,'$.epochSecond') AS bigint) )
1635905306
json_format function is used to convert a JSON value into a string value.json_format(x)
Parameter | Description |
x | The parameter value is of JSON type. |
* | SELECT json_format(json_parse('[1, 2, 3]'))
[1, 2, 3]
json_parse function is used to convert a string value into a JSON value and determine whether it complies with the JSON format.json_parse(x)
Parameter | Description |
x | The parameter value is a string. |
* | SELECT json_parse('[1, 2, 3]')
[1, 2, 3]
json_size function is used to calculate the number of elements in a JSON object or array.json_size(x, json_path)
Parameter | Description |
x | The parameter value is a JSON object or array. |
json_path | JSON path, in the format of $.store.book[0].title. |
* | SELECT json_size(json_parse('[1, 2, 3]'),'$')
3
Last updated:2024-01-22 10:52:48
window_function (expression) OVER ([ PARTITION BY part_key ][ ORDER BY order_key ][ { ROWS | RANGE } BETWEEN frame_start AND frame_end ] )
Parameter | Description |
window_function | Specifies the window value calculation method. Aggregate functions, ranking functions and value functions are supported. |
PARTITION BY | Specifies how a window is partitioned. |
ORDER BY | Specifies how the rows in each window partition are ordered. |
{ ROWS |RANGE } BETWEEN frame_start AND frame_end | Window frames, that is, the data range (rows) used when calculating the value of each row within the window partition. If not specified, it represents all rows within the window partition.
Example: rows between current row and 1 following: The current row and the subsequent row
rows between 1 preceding and current row: The current row and the preceding row
rows between 1 preceding and 1 following: From the preceding row to the subsequent row (a total of three rows)
rows between current row and unbounded following: The current row and all subsequent rows
rows between unbounded preceding and current row: The current row and all preceding rows |
Function | Description |
rank() | Returns the rank of each row in a window partition. Rows that have the same field value are assigned the same rank, and therefore ranks may not be consecutive. For example, if two rows have the same rank of 1, the rank of the next row is 3. |
dense_rank() | Similar to rank(). The difference is that the ranks in this function are consecutive. For example, if two rows have the same rank of 1, the rank of the next row is 2. |
cume_dist() | Returns the cumulative distribution of each value in a window partition, that is, the proportions of rows whose field values are less than or equal to the current field value to the total number of rows in the window partition. |
ntile(n) | Divides the rows for a window partition into n groups. If the number of rows in the partition is not divided evenly into n groups, the remaining values are distributed one per group, starting with the first group. For example, if there are 6 rows of data, and they need to be divided into 4 groups, the numbers of each row of data are: 1, 1, 2, 2, 3, 4. |
percent_rank() | Calculates the percentage ranking of each row in a window partition. The calculation formula is: (r - 1) / (n - 1), where r is the rank value obtained via rank() and n is the total number of rows in the window partition. |
row_number() | Calculates the rank of each row (after ranking based on ranking rules) in a window partition. The ranks are unique and start from 1. |
Function | Description |
first_value(key) | Returns the first value of key of the window partition. |
last_value(key) | Returns the last value of key of the window partition. |
nth_value(key, offset) | Returns the value of key in the row at the specified offset of the window partition. Offsets start from 1 and cannot be 0 or negative. If offset is null or exceeds the number of rows in the window partition, null is returned. |
lead(key[, offset[, default_value]]) | Returns the value of key in the row that is at the specified offset after the current row of the window partition. Offsets start from 0, indicating the current row. offset is 1 by default. If offset is null, null is returned. If the offset row exceeds the window partition, default_value is returned. If default_value is not specified, null is returned.When using this function, you must specify the ranking rule (ORDER BY) within the window partition and cannot use window frames. |
lag(key[, offset[, default_value]]) | Similar to lead(key[, offset[, default_value]]). The only difference is that this function returns the value at offset rows before the current row. |
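As a sketch of the value functions (assuming per-day pv statistics computed with histogram, as in the moving-average example later in this section), lag can compare each day's throughput with that of the previous day:
* | select analytic_time, pv, pv - lag(pv, 1, 0) over (order by analytic_time) as pv_delta from (select histogram(cast(__TIMESTAMP__ as timestamp), interval 1 day) as analytic_time, count(*) as pv group by analytic_time order by analytic_time)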
action indicates the API name, timeCost indicates the API response time, and seqId indicates the request ID.* | select * from (select action,timeCost,seqId,rank() over (partition by action order by timeCost desc) as ranking order by action,ranking,seqId) where ranking<=5 limit 10000
action | timeCost | seqId | ranking |
ModifyXXX | 151 | d75427b3-c562-6d7a-354f-469963aab689 | 1 |
ModifyXXX | 104 | add0d353-1099-2c73-e9c9-19ad02480474 | 2 |
CreateXXX | 1254 | c7d591f0-2da6-292c-8abf-98a0716ff8c6 | 1 |
CreateXXX | 970 | d920cf7a-7e7b-524b-68e9-a957c454c328 | 2 |
CreateXXX | 812 | 16357f6d-33b3-83ea-0ae3-b1a2233d4858 | 3 |
CreateXXX | 795 | 0efdab5e-af5f-4a4a-0618-7961420d17a1 | 4 |
CreateXXX | 724 | fb0481f2-dcfc-9500-cb44-a139b774aceb | 5 |
DescribeXXX | 55242 | 4129dcda-46d7-9213-510e-f58cba29daf5 | 1 |
DescribeXXX | 17413 | e36cdeb0-cbc5-ce2b-dec7-f485818ab6c7 | 2 |
DescribeXXX | 10171 | cd6228f7-4644-ba45-f539-0fce7b09455b | 3 |
DescribeXXX | 9475 | 48b6f6e3-6d08-5a31-cd68-89006a346497 | 4 |
DescribeXXX | 9337 | 940b5398-e2ae-9141-801b-b7f0ca548875 | 5 |
pv indicates the daily application throughput and avg_pv_3 indicates the application throughput after 3-day moving average.* | select avg(pv) over(order by analytic_time rows between 2 preceding and current row) as avg_pv_3,pv,analytic_time from (select histogram( cast(__TIMESTAMP__ as timestamp),interval 1 day) as analytic_time, count(*) as pv group by analytic_time order by analytic_time)

Last updated:2024-01-20 17:25:15
For average-type statistics such as avg and geometric_mean, the sample statistical result can directly represent the true value. For count-type statistics such as count(*), sum, and count_if, the sample statistical result divided by the sample rate represents the true value. For example, if the sample rate for pv (count(*)) is 1:10,000 and the statistical result is 232, then the true value is about 232 / (1/10000) = 2,320,000.
* | select count(*)/(1.0/10000) as pv,avg(response_time) as response_time_avg
pv calculation: count(*) counts the number of log entries and is a sum-type calculation, so the sample statistical result divided by the sample rate gives the true value. Here, (1.0/10000) is the sample rate, and the calculated pv is the estimated true value of pv. response_time_avg calculation: avg(response_time) calculates the average of response_time and is an average-type calculation, so the result can be directly used as the estimated true value of response_time_avg. To evaluate the deviation of the sampled average, query the sample mean, sample count, and sample standard deviation:
* | select avg(response_time) as x,count(response_time) as n,stddev_samp(response_time) as s
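For reference (a standard normal approximation rather than a CLS-specific formula), a 95% confidence interval for the mean can be computed from the three returned values as x ± 1.96 × s / √n; substituting the sampled x, n, and s gives the interval cited below.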
Based on these values, avg(response_time) is within the confidence interval of [541.75,547.18]. The avg(response_time) value obtained by accurate statistical calculation in this case is 545.16, which falls within the confidence interval.
Pay attention to grouped statistical analysis (group by): because the samples are divided into multiple groups by the specified dimension and the statistic is calculated within each group, the number of samples in a single group is lower than the total number of samples. This results in less accurate statistics for groups with a small sample size.
In case of sampling statistical analysis:* | select avg(response_time) as response_time,count(*) as sampleCount,url group by url order by count(*) desc
url | response_time | sampleCount |
/user | 45.23 | 7845 |
/user/list | 78.45 | 6574 |
/user/login | 45.85 | 5235 |
/user/logout | 45.48 | 1245 |
/book/new | 125.78 | 987 |
/book/list | 17.23 | 658 |
/book/col | 10.21 | 23 |
/order | 12.13 | 2 |
The number of samples for /book/col and /order is too low, so their statistical results are less accurate. To get more accurate statistical results for these two URLs, you can increase the overall sample rate or perform statistical analysis on them separately, for example:
url:"/book/col" OR url:"/order" | select avg(response_time) as response_time,count(*) as sampleCount,url group by url order by count(*) desc
Last updated:2025-09-25 20:12:48
Parameter Name | Description |
Name | As the table name in CLS SQL, it supports lowercase letters, numbers, and _, and cannot start or end with _, with a length of 3 - 60 characters. The name cannot be duplicated within the region. |
Remarks | Optional, no more than 255 characters. |
Data resource type | TencentDB for MySQL or TDSQL-C for MySQL. |
Region | Select the region where the TencentDB instance is located. |
MySQL Instance | Select the TencentDB instance. |
Account name | Account name used to access MySQL. |
Password | Password used to access MySQL. |
Database name | Name of the MySQL database to be associated. |
Table name | Name of the table in the MySQL database to be associated. |
Access scope | Current log topic only: Only the current log topic can access the MySQL data through SQL. Log topics in the current logset: All log topics within the current log set can access the MySQL data through SQL. |
Parameter Name | Description |
Name | Used as the table name in CLS SQL, supports lowercase letters, numbers, and _, cannot start or end with _, length is 3 to 60 characters, and the name must be unique within the region. |
Remarks | Optional, no more than 255 characters. |
Data resource type | COS (CSV files). |
Bucket Region | Select the region of the COS file. |
COS Bucket | Select the bucket of the COS file. |
File name | Enter the COS file name. |
Compression Format | Currently only supports No compression. |
Parameter Name | Description |
Field Type | Supports text, long, and double. Please choose according to the actual data type. |
Access Scope | Current log topic only: Only the current log topic can access the COS data through SQL. Log topics in the current logset: All log topics within the current logset can access the COS data through SQL. |
Parameter Name | Description |
Name | As the table name in CLS SQL, it supports lowercase letters, numbers, and _, and cannot start or end with _, with a length of 3 - 60 characters. The name cannot be duplicated within the region. |
Remarks | Optional, no more than 255 characters. |
Data resource type | Self-built MySQL. |
Access mode | Private Address or Public Address. |
Region | When using a private address, select the region where the MySQL instance is located. |
Network | When using a private address, select the VPC where the MySQL instance is located. |
Network service type | When using a private address: If your MySQL needs to be accessed through CLB, select CLB. If your MySQL server can be accessed directly, select CVM. |
Access address | For example, gz-cdb-xxxxx.sql.tencentcdb.com. |
MySQL Port | Database port, e.g., 3306. |
Account name | Account name used to access MySQL. |
Password | Password used to access MySQL. |
Database name | Name of the MySQL database to be associated. |
Table name | Name of the table in the MySQL database to be associated. |
Access scope | Current log topic only: Only the current log topic can access the MySQL data through SQL. Log topics in the current logset: All log topics within the current log set can access the MySQL data through SQL. |
The SQL statement for querying logs in the current log topic is * | select * from log, where from log can be omitted, i.e., * | select *. When querying both log data and external data, it is recommended not to omit it, to improve SQL readability. For external data named userinfo, the corresponding SQL is * | select * from userinfo. To query log data together with the userinfo external data, use a JOIN:
* | select * from log left join userinfo on log.user_id=userinfo.id
| and the time range specified in this query do not apply to external data, only to the log data of the current log topic."status_code": "404""local_time": "2023-06-05 19:59:01""refer": "_","user_id": "15""ip": "66.131.53.125""url": "\"GET /class/111.html HTTP/1.1\""
id | Name | Gender | Age | Email | Phone | Address |
1 | John Doe | Male | 32 | johndoe@example.com | 1234567890 | 123 Main St |
2 | Jane Smith | Female | 28 | janesmith@example.com | 9876543210 | 456 Elm St |
3 | Michael Johnson | Male | 45 | michaeljohnson@example.com | 5551234567 | 789 Oak St |
4 | Sarah Davis | Female | 38 | sarahdavis@example.com | 7894561230 | 321 Pine St |
5 | David Wilson | Male | 51 | davidwilson@example.com | 1237894560 | 654 Maple St |
6 | Emily Anderson | Female | 29 | emilyanderson@example.com | 4567890123 | 987 Cherry St |
7 | Matthew Thompson | Male | 37 | matthewthompson@example.com | 7890123456 | 321 Plum St |
8 | Olivia Martinez | Female | 26 | oliviamartinez@example.com | 2345678901 | 654 Orange St |
9 | Alexander Taylor | Male | 42 | alexandertaylor@example.com | 9012345678 | 987 Grape St |
10 | Emma Clark | Female | 31 | emmaclark@example.com | 3456789012 | 123 Lemon St |
The following query maps the user_id in the log to the user's gender (Gender):
* | select ip,url,user_id,Name,Gender from log left join userinfo on log.user_id=userinfo.id

Based on the join result, count the PV of each gender:
* | select count(*) as pv,Gender from (select ip,url,user_id,Name,Gender from log left join userinfo on log.user_id=userinfo.id) group by Gender

Last updated:2025-11-19 20:15:33
Category | Description | Configuration Method |
Full-Text Index | A raw log is split into multiple segments, and indexes are created based on the segments. You can query logs based on keywords (full-text search). For example, entering error means to search for logs that contain the keyword error. | Console: Enable full-text index on the index configuration page. |
Key-Value Index | A raw log is split into multiple segments based on a field (key:value), and indexes are created based on the segments. You can query logs based on key-value (key-value search). For example, entering level:error means to search for logs with a level field whose value contains error. | Console: On the index configuration page, enable key-value index and enter the field name (`key`), such as level. |
Metadata Index | A metadata index is also a key-value index, but the field name is prefixed with __TAG__. Metadata indexes are usually used to classify logs. For example, entering __TAG__.region:"ap-beijing" means to search for logs with a region field whose value is ap-beijing. | Console: On the index configuration page, enable key-value index and enter the metadata field name (`key`), such as __TAG__.region. |
Logs collected in full text in a single line or full text in multi lines mode are stored as a whole in the __CONTENT__ field and support only full-text index configuration. If you need to configure key-value indexes for some content in the log or enable statistics, you need to perform log structuring and use log extraction modes other than full text in a single line or full text in multi lines.
Configuration Item | Description |
Full-Text Delimiter | A set of characters that split the raw log into segments. Only English symbols are supported. Default delimiters in the console are @&?|#()='",;:<>[]{}/ \n\t\r. Note: If a segment is too long, an index will be created only for the first 10,000 characters, and the excessive part cannot be found. However, the complete log will be stored. |
Case Sensitivity | Specifies whether log search is case-sensitive. For example, if a log is Error and log search is case-sensitive, the log cannot be matched by error. |
Allow Chinese Characters | This feature can be enabled when logs contain Chinese characters and the Chinese characters need to be searched. For example, if the original text of a log is in Chinese, and this feature is disabled, you cannot query the log by using a Chinese keyword contained in the original text. The query can be successful only if you use the exact raw log text to query the log. However, if you enable this feature, you can query the log by using a Chinese keyword contained in the raw log text. |
10.20.20.10;[2018-07-16 13:12:57];GET /online/sample HTTP/1.1;200
IP: 10.20.20.10
request: GET /online/sample HTTP/1.1
status: 200
time: [2018-07-16 13:12:57]
If the full-text delimiters are @&()='",;:<>[]{}/ \n\t\r (including space), all field values in the raw log will be segmented into the following keywords (each line denotes a keyword):
10.20.20.10
GET
online
sample
HTTP
1.1
200
2018-07-16
13
12
57
\/online\/login
The \ is used to escape the / symbol (this symbol is a reserved symbol of the search syntax and therefore needs to be escaped). The / symbol is a delimiter, so the actual search condition is online OR login. A log containing online or login is considered to meet the search condition.
"/online/login"
When the search condition is enclosed in double quotation marks, the / symbol does not need to be escaped. The sample log does not contain the keyword login and therefore does not meet the search condition.
"/online/sample"
The sample log contains online and sample in the exact order as that in the search condition and therefore is considered to meet the search condition.
A key-value search condition is in the format of key:value, for example, status:200. If no field name is specified, a full-text search will be performed.
Built-in Reserved Field | Description |
__FILENAME__ | Filename for log collection, which can be used to search for logs in a specified file. For example, you can use __FILENAME__:"/var/log/access.log" to search for logs from the /var/log/access.log file. |
__SOURCE__ | Source IP for log collection, which can be used to search for logs of a specified server. For example, you can use __SOURCE__:192.168.10.10 to search for the logs of the server whose IP is 192.168.10.10. |
__HOSTNAME__ | The server name of the log, which can be used to search for logs of a specified server. Only LogListener 2.7.4 or later can collect this field. |
__TIMESTAMP__ | Log timestamp (UNIX timestamp in milliseconds). When a log is searched by time range, the system automatically searches for the log by this time and displays the time as the log time on the console. |
__PKG_LOGID__ | Log ID in a log group. This ID is used for context search. Using this ID alone is not recommended. |
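As an illustration (not part of the original examples), built-in reserved fields can be combined in one search statement; the file path and IP below are placeholder values:
__SOURCE__:192.168.10.10 AND __FILENAME__:"/var/log/access.log"
This returns the logs collected from /var/log/access.log on the server whose IP is 192.168.10.10.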
Configuration Item | Description | Remarks |
Field Name | Name of the field for which a key-value index is configured. Note: You can add up to 300 fields for a key-value index of a log topic. | - |
Data Type | Data type of the field. There are three types: text, long, and double. The text type supports fuzzy search by wildcard, while the long and double types support range search. Note: 1. Fields of the long type support a data range of -1E15 to 1E15. Data out of the range may lose certain decimal places or not be matched. When configuring an index for a very long numeric field, we recommend that you: store the field as the text type if you don't need to search for it by comparing it with a numeric range; store the field as the double type if you do need range search, which may lose certain decimal places. 2. Fields of the double type support a data range of -1.79E+308 to +1.79E+308. If the floating-point number exceeds 64 characters, decimal places will be lost. | long: integer (Int64) double: floating point (64-bit) text: string |
Delimiter | A set of characters that split the field value into segments. Only English symbols are supported. Note: If a segment is too long, an index will be created only for the first 10,000 characters, and the excessive part cannot be found. However, the complete log will be stored. | Default delimiters in the console: @&?|#()='",;:<>[]{}/ \n\t\r |
Allow Chinese Characters | This feature can be enabled when fields contain Chinese characters and the Chinese characters need to be searched. For example, if the original text of a log is in Chinese, and this feature is disabled, you cannot query the log by using a Chinese keyword contained in the original text. The query can be successful only if you use the exact raw log text to query the log. However, if you enable this feature, you can query the log by using a Chinese keyword contained in the raw log text. | - |
Enable Statistics | After it is toggled on, SQL statistical analysis can be performed on the field, such as group by ${key} and sum(${key}).Note: If it is toggled on for a field of the `text` type and the value is too long, only the first 32,766 characters will be included in the statistical calculation (SQL). If the field contains Chinese characters, the log will be lost if the value contains more than 32,766 characters. We recommend that you toggle the feature off in this case. | This feature is part of the key-value index feature and therefore is not billed separately. |
Case Sensitivity | Specifies whether log search is case-sensitive.For example, if a log is level:Error and log search is case-sensitive, the log cannot be matched by level:error. | - |
10.20.20.10;[2018-07-16 13:12:57];GET /online/sample HTTP/1.1;200
IP: 10.20.20.10
request: GET /online/sample HTTP/1.1
status: 200
time: [2018-07-16 13:12:57]
Field Name | Field Type | Delimiter | Allow Chinese Characters | Enable Statistics |
IP | text | @&()='",;:<>[]{}/ \n\t\r | No | Yes |
request | text | @&()='",;:<>[]{}/ \n\t\r | No | Yes |
status | long | None | No | Yes |
time | text | @&()='",;:<>[]{}/ \n\t\r | No | Yes |
request:\/online\/login
The \ is used to escape the / symbol (this symbol is a reserved symbol of the search syntax and therefore needs to be escaped). The / symbol is a delimiter, so the actual search condition is online OR login. A log containing online or login is considered to meet the search condition.
request:"/online/login"
When the search condition is enclosed in double quotation marks, the / symbol does not need to be escaped. The request field of the sample log does not contain login and therefore does not meet the search condition.
request:"/online/sample"
The request field of the sample log contains online and sample in the exact order as that in the search condition and therefore is considered to meet the search condition.
request:"/online/login" | select count(*) as logCounts
Counts the number of logs whose request is "/online/login".
* | select count(*) as logCounts,request group by request order by count(*) desc limit 10
Counts the number of logs for each request value and returns the top 10 request values by log count.
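Since status is configured above as a long field, range conditions are also supported. A hedged sketch that counts error responses under this sample configuration:
status:>=400 | select count(*) as errorCounts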
Field Type | Delimiter | Chinese Characters | Statistics |
text | @&?|#()='",;:<>[]{}/ \n\t\r\\ | Included | Enabled |
long | Not involved | Not involved | Enabled |
double | Not involved | Not involved | Enabled |
key1:textValue
key2:123
key3:{"ip":"123.123.123.132","url":"class/132.html","detail":{"status_code":"500","id":13}}
Field Name | Field Type | Delimiter | Chinese Characters | Statistics |
key1 | text | @&?|#()='",;:<>[]{}/ \n\t\r\\ | Included | Enabled |
key2 | long | Not involved | Not involved | Enabled |
key3 | text | @&?|#()='",;:<>[]{}/ \n\t\r\\ | Included | Enabled |
Field Name | Field Type | Delimiter | Chinese Characters | Statistics |
key1 | text | @&?|#()='",;:<>[]{}/ \n\t\r\\ | Included | Enabled |
key2 | long | Not involved | Not involved | Enabled |
key3.ip | text | @&?|#()='",;:<>[]{}/ \n\t\r\\ | Included | Enabled |
key3.url | text | @&?|#()='",;:<>[]{}/ \n\t\r\\ | Included | Enabled |
key3.detail | text | @&?|#()='",;:<>[]{}/ \n\t\r\\ | Included | Enabled |
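For example (a hedged sketch based on the second configuration above, where the nested JSON child fields are indexed separately), the nested field can be searched and analyzed directly:
key3.ip:"123.123.123.132" | select count(*) as pv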
Log metadata is passed through the LogTag field (for more information, see the LogTag field in Uploading Log via API), while the raw log content is passed through the Log field. A metadata index needs to be configured for all data that is passed via LogTag. A metadata index is a key-value index in essence, adopting the same indexing rules and configuration methods as key-value indexes. The only difference is that the metadata field in a metadata index is identified by the specific prefix __TAG__. For example, the region metadata field is indexed as __TAG__.region.
Assume that the raw log is:
10.20.20.10;[2018-07-16 13:12:57];GET /online/sample HTTP/1.1;200
If the log metadata is region:ap-beijing, the structured log uploaded to CLS will be as follows:
IP: 10.20.20.10
request: GET /online/sample HTTP/1.1
status: 200
time: [2018-07-16 13:12:57]
__TAG__.region:ap-beijing
Field Name | Delimiter |
__TAG__.region | @&()='",;:<>[]{}/ \n\t\r |
When you search for __TAG__.region:"ap-beijing", the sample log can be returned.
Configuration Item | Description | Recommended Configuration |
Include built-in reserved fields in full-text index | Contain: The full-text index contains built-in fields __FILENAME__, __HOSTNAME__, and __SOURCE__, and full-text search and key-value search are supported, such as "/var/log/access.log" and __FILENAME__:"/var/log/access.log". Not contain: The full-text index does not contain the aforementioned built-in fields, and only key-value search can be used, such as __FILENAME__:"/var/log/access.log". | Contain |
Include metadata fields in full-text index | Contain: The full-text index contains all metadata fields (those prefixed with __TAG__), and log fields can be searched for directly with full-text search, such as ap-beijing. Not contain: The full-text index does not contain any metadata fields, and log fields can be searched for only with key-value search, such as __TAG__.region:ap-beijing. Key-value search is not supported for STANDARD_IA log topics, and fields cannot be searched for in this case. Contain only metadata fields with key-value index enabled: The full-text index contains metadata fields with key-value index enabled but not metadata fields with key-value index disabled. This option is not available for STANDARD_IA log topics. | Contain |
Log storage rule in case of index creation exception | In case of any exception during index creation for logs, CLS will store raw logs in __RAWLOG__ to avoid log loss. If index creation fails only for certain fields, the failed part can be stored in the specified field (which is RAWLOG_FALL_PART by default). For more information, see Handling rule for a log index creation exception. | Enable |
Take the following log as an example, where kye1 is a common field, and kye2 and kye3 are nested JSON fields:
{"kye1": "http://www.example.com","kye2": {"address": {"country": "China","city": {"name": "Beijing","code": "065001"}},"contact": {"phone": {"home": "188xxxxxxxx","work": "187xxxxxxxx"},"email": "xxx@xxx.com"}},"kye3": {"address": {"country": "China","city": {"name": "Beijing","code": "065001"}},"contact": {"phone": {"home": "188xxxxxxxx","work": "187xxxxxxxx"},"email": "xxx@xxx.com"}}}
A key-value index is configured for the kye1 and kye2.address fields but not the kye3 field.
kye2.address is displayed as a string, and its attributes and objects are not further expanded.
Although kye2.contact is not configured with a key-value index, because kye2.address is configured with an index, kye2.contact, as an object at the same level as kye2.address, is also displayed as a string.
kye3 is not configured with a key-value index, and therefore its attributes and objects are not expanded.
The Results parameter in the output parameters is as follows (other parameters are not affected and remain unchanged):
{"Time": 1645065742008,"TopicId": "f813385f-aee0-4238-xxxx-c99b39aabe78","TopicName": "TestJsonParse","Source": "172.17.0.2","FileName": "/root/testLog/jsonParse.log","PkgId": "5CB847DA620DB3D4-10D","PkgLogId": "65536","HighLights": [],"Logs": null,"LogJson": "{\"kye1\":\"http://www.example.com\",\"kye2\":{\"address\":\"{\\\"country\\\":\\\"China\\\",\\\"city\\\":{\\\"name\\\":\\\"Beijing\\\",\\\"code\\\":\\\"065001\\\"}}\",\"contact\":\"{\\\"phone\\\":{\\\"home\\\":\\\"188xxxxxxxx\\\",\\\"work\\\":\\\"187xxxxxxxx\\\"},\\\"email\\\":\\\"xxx@xxx.com\\\"}\"},\"kye3\":\"{\\\"address\\\":{\\\"country\\\":\\\"China\\\",\\\"city\\\":{\\\"name\\\":\\\"Beijing\\\",\\\"code\\\":\\\"065001\\\"}},\\\"contact\\\":{\\\"phone\\\":{\\\"home\\\":\\\"188xxxxxxxx\\\",\\\"work\\\":\\\"187xxxxxxxx\\\"},\\\"email\\\":\\\"xxx@xxx.com\\\"}}\"}"}
kye2.address is a string, so its value is escaped as a string.
kye2.contact is an object at the same level as kye2.address, and although kye2.contact is not configured with a key-value index, its value is also escaped as a string.
kye3 is not configured with a key-value index and is escaped as a string as a whole.
{"Time": 1645065742008,"TopicId": "f813385f-aee0-4238-xxxx-c99b39aabe78","TopicName": "zhengxinTestJsonParse","Source": "172.17.0.2","FileName": "/root/testLog/jsonParse.log","PkgId": "25D0A12F620DBB64-D3","PkgLogId": "65536","HighLights": [],"Logs": null,"LogJson": "{\"kye1\":\"http://www.example.com\",\"kye2\":{\"address\":\"{\\\"city\\\":{\\\"code\\\":\\\"065001\\\",\\\"name\\\":\\\"Beijing\\\"},\\\"country\\\":\\\"China\\\"}\",\"contact\":{\"phone\":{\"work\":\"187xxxxxxxx\",\"home\":\"188xxxxxxxx\"},\"email\":\"xxx@xxx.com\"}},\"kye3\":{\"address\":{\"country\":\"China\",\"city\":{\"code\":\"065001\",\"name\":\"Beijing\"}},\"contact\":{\"phone\":{\"work\":\"187xxxxxxxx\",\"home\":\"188xxxxxxxx\"},\"email\":\"xxx@xxx.com\"}}}"}
If an exception occurs during index creation for a log, CLS stores the raw log in the __RAWLOG__ field for exception handling. This avoids log loss. __RAWLOG__ supports only full-text search (full-text index needs to be enabled) but not key-value search, key-value index, or statistical analysis. After full-text index is enabled, index traffic, index storage, and fees will still be calculated according to the full text of the raw log for the abnormal log, without additional fees.
If index creation fails for the entire log, the log is stored in the __RAWLOG__ field, and only full-text search can be used.
If index creation fails only for certain fields, the log consists of the __RAWLOG__ field and the fields with a successfully created index (these fields support properly configured key-value index and statistical analysis). In Index Configuration > Advanced Settings, you can also store abnormal fields in the specified field (which is RAWLOG_FALL_PART by default and supports configuring key-value index and statistical analysis).
Last updated:2024-01-20 17:25:15
Last updated:2024-12-20 16:15:18


If you use KeyA:xxx to perform retrieval, the query on Topic B will report an error, and only logs related to Topic A can be viewed.
Last updated:2024-01-20 17:25:15

Last updated:2024-12-20 16:17:33

Redirection type | Applicable Scenario |
Open External URL | Open the specified URL and carry the designated fields from the log as parameters in the URL, for example, query user information on the internal user management platform based on user_id. |
Search for other log topic | Retrieve the specified log topic and carry the designated fields from the log as retrieval conditions, for example, retrieve related logs in other log topics based on request_id. |


{{__currentValue__}} represents the value currently being clicked. For example, enter stgw_request_id:{{__currentValue__}} to use the currently clicked value as the key-value search condition. When searching other log topics, it will automatically be converted to stgw_request_id:"8da469b42947445891cc10fc55d75471" in the search statement.


${__CurrentValue} indicates the currently clicked field value. When the field is word-segmented, this variable refers to the word after segmentation. For example, in the following figure, the separator / is behind kube-scheduler. When the mouse pointer is hovered, only kube-scheduler is highlighted. When a custom redirection is triggered by clicking, the corresponding ${__CurrentValue} is kube-scheduler.
${__TopicId} indicates the current topic ID, such as a85bbd1c-233f-xxxx-aeda-70cbd9f8715a.
${__StartTime} and ${__EndTime} indicate the start and end Unix timestamps of the current query time range.
You can also use ${} to enclose a field name as a variable to represent the complete value of that field. For example, in the above image, you can use ${userAgent} to represent the value of the userAgent field, i.e., ${userAgent}=kube-scheduler/v1.20.6 (linux/amd64) kubernetes/1cb721e/leader-election.
Last updated:2024-01-20 17:25:15




Last updated:2024-09-20 17:48:27
disk_total

disk_total{host="6c74e00eb825", path="/etc/hostname"}

sum by (host,device)(disk_total)


Last updated:2024-12-25 14:41:47
Enter Path | Feature | Description | Details |
Folder Creation | Dashboards support folder creation for classifying and collapsing dashboards, making them easier to manage. | - | |
Create a Dashboard | CLS supports various ways to create a dashboard: Blank Dashboard: Create a blank dashboard that requires users to add charts manually and save them. Import Dashboard: Import a Json file of an existing CLS dashboard. The Json file is exported from an existing dashboard. Create from Dashboard Template (Recommended): CLS offers a wide range of dashboard templates. You can choose the right template according to your scenario and log content, enable it with one click, and a dashboard will be automatically generated. | ||
Filters and Variables | Filters: All chart data in the dashboard can be filtered by the specified field value. Variables: Users can set a variable value via static input or dynamic query and apply it to search statements, titles, and text. | ||
Table | A table is the most common type of data display, where data is organized for comparison and counting. It is suitable for most scenarios. | ||
Sequence Diagram | A sequence diagram requires statistics to have a sequence field so that it can organize and aggregate the metrics in chronological order. It visually reflects the change trend of a metric over time. It is suitable for trend analysis scenarios, for example, analyzing the trend of the daily number of 404 errors in the past week. | ||
Bar Chart | A bar chart describes categorical data. It visually reflects the comparison of each category in size. It is suitable for category statistics scenarios, for example, collecting the numbers of each type of error code in the last 24 hours. | ||
Pie Chart | A pie chart describes the proportions of different types. It measures the proportion of each type by the slice size. It is suitable for proportion statistics scenarios, for example, analyzing the proportions of different error codes. | ||
Individual Value Plot | An individual value plot describes a single metric, typically a key metric with business value. It is suitable for collecting daily, weekly, or monthly metrics such as PV and UV. | ||
Gauge Chart | A gauge chart describes a single metric. Unlike an individual value plot, it is generally used with a threshold to measure the metric status. It is suitable for rating scenarios, such as system health monitoring. | ||
Map | A map shows the geographic location of data through the position of graphics. It is generally used to display the distribution of data in different geographic locations. It is suitable for geographic statistics scenarios, such as the geographic distribution of attacker IPs. | ||
Sankey Diagram | A Sankey diagram is a special type of flow diagram used to describe the flow of one set of values to another set. It is suitable for directional statistics scenarios, such as firewall source and destination IP traffic. | ||
Word Cloud | A word cloud is a visual representation of the frequency of words. It is suitable for audit statistics scenarios, such as high-frequency personnel statistics. | ||
Funnel Chart | A funnel chart is suitable for business processes with one single flow direction and path. It collects the statistics of each stage and uses a trapezoidal area to represent the business volume difference between two stages. | ||
Log | Log charts allow you to save raw logs to the dashboard. You can quickly view the analysis result and the associated log content on the dashboard page, with no need to redirect to the search and analysis page. | ||
Text | Text type charts support the MarkDown syntax. You can insert text, image links, hyperlinks, etc. on the dashboard page. | ||
Heat Map | Heatmaps display statistical charts by coloring the blocks. For statistical indicators, higher values are represented by darker colors and lower values are represented by lighter colors. Heatmaps are suitable for viewing the overall situation, detecting anomalies, displaying differences among multiple variables, and checking whether there is any correlation between them. | ||
Data Conversion | Data conversion allows you to perform further processing of search results, including modifying data types, selecting fields for chart creation, and merging groups. This satisfies your chart creation needs without modifying SQL statements. | ||
Unit Configuration | Automatic unit conversion is available in charts. When you select an original unit, the value is automatically converted to the next higher unit if it meets the conversion factor. Units can be configured to display decimal places. | ||
Other Feature Configuration | Interaction Event | The chart has its interaction event feature, which allows clicking the chart content to trigger interactions such as opening up the search and analysis page, dashboard page, and third-party URL. | |
Add a Group | The dashboard supports chart grouping. Grouping allows you to categorize and collapse dashboard chart contents. | - | |
Subscribing to Dashboard | CLS allows you to subscribe to a dashboard and export it as an image. Daily, weekly, and monthly reports can be sent to specified recipients regularly via email or WeCom. It is suitable for dashboards whose daily, weekly, or monthly reports need to be sent to the team. | |
Chart Time Configuration | After turning off the use of global time, the chart time is controlled independently and no longer changes with the change in global time. Different charts in the dashboard can use different time ranges, supporting more varied comparison scenarios. |
Last updated:2024-01-20 17:31:30
Operation | Description |
Adding a chart | A dashboard also provides an entry for creating statistical charts, supporting simple chart creation and custom chart creation. For more information, please see Adding a Chart. |
Deleting a chart | You can delete an existing chart. |
Editing a chart | You can edit a chart on the chart editing page. |
Copying a chart | You can copy a chart to the current or another dashboard. |
Exporting chart data | You can export chart data in CSV format. |
Viewing data on the search and analysis page | You can quickly add search statements, log topics, and other information for the current chart on the search and analysis page. |
Quickly adding an alarm | You can quickly add search statements, log topics, and other information for the current chart on the alarm editing page. |
Full-screen browsing | Full-screen browsing of a single chart or the entire dashboard is supported. |
Refreshing | Automatically periodic data refreshing and manual data refreshing are supported. |
Adding a template variable | Template variables allow you to define and modify data query and filtering parameters in the dashboard more flexibly, which improves the reuse rate of the dashboard and the granularity of analysis. For more information, please see Template Variables. |
Viewing/Editing a dashboard | You can view a dashboard and edit the dashboard layout. |
Operation | Description |
Creating a dashboard | You can create a dashboard on the dashboard creation page. |
Deleting a dashboard | You can delete an existing dashboard. |
Modifying dashboard tags | You can modify a single dashboard tag or multiple dashboard tags at a time. |
Opening a dashboard | You can open a dashboard and go to the dashboard viewing/editing page. |
Searching for or filtering dashboards | You can search for or filter dashboards by a combination of dashboard attributes such as the dashboard ID, name, region, and tag. |
Last updated:2024-01-20 17:31:30

Form Element | Description |
Chart Name | The chart name. |
Target Dashboard | The dashboard type of the chart. If you select **To an existing dashboard**, the chart will be added to an existing dashboard. If you select **Create Dashboard**, you need to create a dashboard and add the chart to it. |
Dashboard | The dashboard name. |


Chart Element | Description |
Field | The list of current log topic fields. You can click or drag and drop a field to the condition input box on the right. |
Metric | A metric measures a certain characteristic of an item, generally a numeric field. You can drag and drop a field from the field list on the left or click the "+" icon to display a drop-down list to select a field. After adding a field, you can click the settings icon next to the field to modify the field's aggregate calculation mode, which is "AVG" by default. |
Dimension | A dimension is the perspective for analyzing a metric. It is generally a string-type field describing the attributes of an item. You can drag and drop a field from the field list on the left or click the "+" icon to display a drop-down list to select a field. |
Filter | A filter field filters data attributes. You can drag and drop a field from the field list on the left or click the "+" icon to display a drop-down list to select a field. After adding a field, you can click the settings icon next to the field to modify the field filter mode, which is "Exist" by default. |
Sort | A sorting field sorts statistical results. Only specified condition fields can be sorted. We recommend you click the "+" icon to display a drop-down list to select a field. After adding a field, you can click the settings icon next to the field to modify the sorting mode, which is "Ascending" by default. |
Row quantity limit | Row quantity limit filters the number of statistical results. After it is set, a certain number of statistical results will be displayed in reverse order. The valid range is 1–1,000, and the default value is `1000`. |
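For reference, a simple-mode configuration of this kind is equivalent to a search and analysis statement. The sketch below assumes a hypothetical numeric field request_time as the metric (aggregated with AVG) and a string field url as the dimension, sorted in descending order with the default row limit:
* | select avg(request_time) as request_time, url group by url order by avg(request_time) desc limit 1000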


Last updated:2024-01-20 17:31:30
Configuration Item | Description |
Basic information | Chart name: Set the display name of the table, which can be left empty. |
Table | Alignment: Set the alignment mode of table content in the cells. By default, metric-type fields are aligned to the right, and dimension-type fields are aligned to the left. |
Standard configuration |


Last updated:2024-01-20 17:31:30
Configuration Item | Description |
Chart Name | Set the display name of the table, which can be left empty. |
Legend | Set the chart legends. You can control the legend styles and positions and add comparison data to legends. |
Tooltip | Control the content style of the bubble tip displayed when the mouse is hovered over. |
Unit |
Configuration Item | Description |
Changes | After the changes feature is enabled, you can compare the data in a time period with the data in the same period X hours, days, months, or years ago. The comparison data is displayed as dotted lines in the chart. |
Configuration Item | Description |
Sequence diagram | Drawing Style: Set the display style of data on coordinate axes. If you select a line, column, or dot, it will be a line chart, histogram, or scatter plot respectively. Linear: Set whether to smooth the connections between points. Line Width: Control the thickness of lines. Fill: Control the transparency of the fill area. If this value is 0, there will be no fill. Display Point: Display data points. If there is no data, no points will be displayed. Null: Control the processing of a sequence point if there is no data on the point. This value is 0 by default. Stack: Control whether to display data in a stack. |
Axes | Show: Show/Hide axes. MAX/MIN: Control the maximum and minimum values displayed on coordinate axes. Coordinate areas greater than the maximum value or smaller than the minimum value will not be displayed. |



Configuration Item | Description |
Threshold configuration | Threshold Point: Set the threshold points. You can add multiple threshold intervals. You can click a threshold color to open the color picker to customize the color. Threshold Display: Control the style of threshold display, including three modes: threshold line, area filling, and both. If this option is disabled, no threshold will be used. |

* | select histogram( cast(__TIMESTAMP__ as timestamp),interval 1 minute) as time, count(*) as pv,count( distinct remote_addr) as uv group by time order by time desc limit 10000

* | select histogram( cast(__TIMESTAMP__ as timestamp),interval 1 minute) as time, protocol_type, count(*) as pv group by time, protocol_type order by time desc limit 10000

* | select date_trunc('minute', __TIMESTAMP__) as time, round(sum(case when status = 404 then 1.00 else 0.00 end)/ cast(count(*) as double)*100,3) as "404 proportion", round(sum(case when status >= 500 then 1.00 else 0.00 end)/cast(count(*) as double)*100,3) as "5XX proportion", round(sum(case when status >= 400 and status < 500 then 1.00 else 0.00 end)/cast(count(*) as double)*100,3) as "4XX proportion", round(sum(case when status >= 400 then 1.00 else 0.00 end)/cast(count(*) as double)*100,3) as "total failure rate" group by time order by time limit 10000

Last updated:2024-01-20 17:31:30
Configuration Item | Description |
Basic information | Chart Name: Set the display name of the table, which can be left empty. |
Standard configuration |
Configuration Item | Description |
Bar chart | Direction: Control the bar/column direction. A bar chart is horizontal, while a column chart is vertical. Sort By: Control the bar/column sorting order, which can be ascending or descending by metric. If there are multiple metrics, you need to select one for sorting. Sorting is disabled by default. Display Value: Control whether to display the value label of each bar/column. Bar/Column Mode: Grouped and stacked display modes are supported. |
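As an illustration of the error-code scenario mentioned above, a statement behind a bar chart might look like the following sketch (status is a hypothetical field name):
* | select status, count(*) as count group by status order by count desc limit 10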


Last updated:2024-01-20 17:31:30
Configuration Item | Description |
Basic information | Chart Name: Set the display name of the table, which can be left empty. |
Legend | Set the chart legends. You can control the legend styles and positions and add comparison data to legends. |
Standard configuration |
Configuration Item | Description |
Pie chart | Display Mode: Control the pie chart style. A solid chart is a pie chart, and a hollow chart is a donut chart. Sort By: Control the slice sorting order, which can be ascending and descending. Sorting is disabled by default. Merge Slices: Merge slices other than top N slices into the "Others" slice. If there are too many slices, you can use this feature to focus on top N slices. Label: Display pie chart labels. You can set name, value, and/or percentage as tags. |
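For example, the error-code proportion scenario mentioned above could use a statement like the following sketch (status is a hypothetical field name); the Merge Slices option can then fold the long tail into the "Others" slice:
* | select status, count(*) as count group by status order by count desc limit 20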

Last updated:2024-01-20 17:31:30
Configuration Item | Description |
Basic information | Chart Name: Set the display name of the table, which can be left empty. |
Standard configuration |
Configuration Item | Description |
Changes | After the changes feature is enabled, you can compare the data in a time period with the data in the same period X hours, days, months, or years ago. You can choose to compare absolute values or percentages. |
Configuration Item | Description |
Individual value plot | Display: Control whether to display metric names on the individual value plot. Value: If a metric has multiple statistical results, they need to be aggregated to one value or one of them needs to be selected for display on the individual value plot. By default, the latest non-null value will be used. Metric: Set the target statistical metric, which is Auto by default, in which case the first metric field in the returned data will be selected. |
383 will be displayed. If you set Value to Sum, the sum of the three data entries will be displayed.
Configuration Item | Description |
Threshold configuration | Threshold point: Set the threshold points. You can add multiple threshold intervals. You can click a threshold color to open the color picker to customize the color. |
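A minimal statement that returns a single value suitable for an individual value plot, for example daily UV (remote_addr is a hypothetical client IP field, as in the sequence diagram examples elsewhere in this document):
* | select count(distinct remote_addr) as uv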
Last updated:2024-01-20 17:31:30
Configuration Item | Description |
Basic information | Chart Name: Set the display name of the table, which can be left empty. |
Standard configuration |
Configuration Item | Description |
Threshold configuration | Threshold point: Set the threshold points. You can add multiple threshold intervals. You can click a threshold color to open the color picker to customize the color. MAX/MIN: Control the maximum and minimum values on the gauge. Data outside the range will not be displayed on the chart. |

Last updated:2024-01-20 17:31:30
Configuration Item | Description |
Basic information | Chart Name: Set the display name of the table, which can be left empty. |
Map | Location: Set the region range displayed on the map. You can select China map or world map. It is set to Auto by default, in which case the map is automatically adapted to the region information contained in the data. |
Legend | Set the chart legends. You can control the legend styles and positions and add comparison data to legends. |
Last updated:2024-01-20 17:31:30
Configuration Item | Description |
Basic information | Chart Name: Set the display name of the table, which can be left empty. |
Sankey diagram | Direction: Set the display direction of the Sankey diagram. |
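A Sankey diagram needs two dimensions (source and destination) and one metric. A hedged sketch with hypothetical field names source_ip, dest_ip, and bytes:
* | select source_ip, dest_ip, sum(bytes) as traffic group by source_ip, dest_ip order by traffic desc limit 100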

Last updated:2024-01-20 17:31:30
Configuration Item | Description |
Basic information | Chart Name: Set the display name of the table, which can be left empty. |
Word cloud | Max Words: Control the maximum number of words to be displayed, i.e., top N words. Up to 100 words can be displayed. Font Size: Control the font size range of words in the word cloud. |
Standard configuration |
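A word cloud typically uses one dimension and a count metric, for example the high-frequency personnel statistics scenario mentioned earlier (user_name is a hypothetical field name):
* | select user_name, count(*) as operations group by user_name order by operations desc limit 100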
Last updated:2024-01-20 17:31:31
Configuration Item | Description |
Basic information | Chart Name: Set the display name of the table, which can be left empty. |
Legend | Set the chart legends. You can control the legend styles and positions and configure the data to be displayed as legends. |
Standard configuration |
Configuration Item | Description |
Funnel chart | Display Value: Set the label display form of each stage in the funnel chart, which can be Value or Conversion rate. Max Rendered Stages: Set the number of rendered stages in the funnel chart, which can be up to 20. Conversion Rate: Set the calculation method of the conversion rate, which can be Percentage of the first stage or Percentage of the previous stage. |
* | select url, count(*) as pv group by url limit 5

Last updated:2024-01-20 17:31:30
Configuration Item | Description |
Basic information | Chart Name: Set the display name of the table, which can be left empty. |
Log | Layout: Select the original layout to display logs as text, or select the table layout to display logs as structured fields and field values in a table. Showed Field: Select the fields to be displayed. If this parameter is left empty, all fields will be displayed. Line Break: After it is enabled, log text in the original layout will be divided into lines by field, where each field occupies a line. Line No.: After it is enabled, the log number will be displayed. Log Time: After it is enabled, the log time will be displayed. |
Last updated:2024-03-11 16:07:32
Syntax | Note |
# Heading | Heading 1 |
## Heading | Heading 2 |
**Bold** | Bold font |
*Italic* | Italic font |
[Link](http://a.com) | Hyperlink |
 | Image link |
> Blockquote | Quote |
* List
* List
* List | List |
Horizontal rule:
--- | Dividing line |
`Inline code` | Inline code block |
```
# code block
print '3 backticks or'
print 'indent 4 spaces'
``` | Code block |
<font face="Microsoft Yahei" color=green size=5> Microsoft Yahei,green,size=5</font> | Styled text |
Last updated:2024-12-25 12:02:21
Configuration Item | Description |
Basic Information | Chart Name: Set the display name of the table, which can be left empty. |
Statistical Analysis Field | Defines the fields used by the chart. By default, fields are adapted automatically, and you can also manually specify the fields corresponding to the chart content. X-axis field: Select the field corresponding to the heatmap X-axis. Y-axis field: Select the field corresponding to the heatmap Y-axis. Metric field: Select the field corresponding to the heatmap block. |
Axes | Show X-axis: Show/hide X-axis. Y-axis position: Configure the display position of the Y-axis. |
Standard configuration | |
Visual mapping | Show/hide Visual Map Configuration. |
Interaction event | Support clicking the numbers in the chart to trigger interactive events, such as jumping to the set URL address, opening the search analysis page, opening the dashboard page, adding filter conditions, etc. For details, see Interaction Event. |
Configuration Item | Description |
Heat Map | Counts: Show or hide the value labels of heatmap blocks. Values are hidden by default. |
* | select histogram(__TIMESTAMP__,interval 1 hour) as time,isp,round(avg( "request_time" ),1) as "request_time" group by time,isp order by time limit 10000
* | select url,prov, count(*) as count group by url,prov order by count desc limit 1000
Last updated:2024-01-20 17:31:30


You can select some of the fields returned by SELECT for chart creation. Below is the result after some fields are selected:
Fields of the # (numeric) type will be identified as metrics, fields of the t (string) type will be identified as common dimensions, and fields of the time type will be identified as time dimensions based on the chart type. Then, they will be matched with the field attributes required for chart creation. For example, in a sequence diagram, the time dimension field needs to be the X-axis, and the metric field needs to be the Y-axis.
The time field in the above figure is treated as a common dimension, which doesn't meet the requirements for field attributes of a sequence diagram. If you modify the time field attribute to the time type, you can see that the time field changes to the time format (this operation is equivalent to the CAST function). At this point, you can use a sequence diagram.
The histogram function is usually used to process fields, and the result can be in a non-standard time format. Therefore, if the default field attribute is common dimension, the sequence diagram cannot be used. You need to use the CAST function to convert the field to the time type or use data conversion to change the field attribute to the time dimension.
Group by server_addr and server_name and collect the PV and UV of each group. If you want to merge the results by server_name, you can hide the server_addr field and merge the results in the selected server_name dimension. This operation is equivalent to the GROUP BY function.
Last updated:2024-01-20 17:31:31
If Auto is selected, two decimal places will be retained. You can modify the precision as needed.
You can also add a custom unit with a custom prefix. However, note that the custom unit is fixed and cannot be automatically converted. Therefore, if you want to use the raw unit in a unified manner without triggering automatic unit conversion, you can manually add a fixed unit.
Last updated:2024-01-20 17:31:31
Type | Description | Scope |
Filter in the drop-down list | Filter data in all charts on the dashboard by specifying the field value. If statistics is enabled in the index configuration of the log topic for the filter field, the field value can be automatically obtained as a list item. | All charts on the dashboard |
Filter by search statement | Filter data in all charts on the dashboard by entering a search statement, that is, add a filter in the query statement of the charts.Filter by search statement includes filter by range, NOT, and full text. | All charts on the dashboard |
Data source variable | A data source variable enables batch switching data sources of the charts on the dashboard. It is applicable to scenarios such as applying a dashboard to multiple log topics and comparing data in blue and green on the dashboard. | Charts that use the variable on the dashboard |
Custom variable | A custom variable can be set to a static input or a value from a dynamic query and applied to search statements, titles, and text charts for quick batch statement modification. | Charts that use the variable on the dashboard |
Form Element | Description |
Type | Different types correspond to different configuration items and application scenarios. Here, select **Filter in the drop-down list**. |
Filter alias | It is the filter name displayed on the UI, which is optional. If it is left empty, the filter field will be used automatically. |
Log topic | It is the log topic to which the filter field belongs. |
Filter field | It is the object field to be filtered. |
Dynamic option | After it is enabled, the filter field value will be obtained automatically as the filter option. |
Static option | A static option is optional, needs to be added manually, and will be always displayed. You can configure its alias. |
Default filter | It is the default filter of the dashboard and is optional. |
Support for multiple items | After it is enabled, multiple filters can be selected as the filter condition. |
Form Element | Description |
Type | Different types correspond to different configuration items and application scenarios. Here, select **Filter by search statement**. |
Filter name | It is the unique filter name. |
Filter alias | It is the filter name displayed on the UI, which is optional. |
Log topic | It is the log topic to which the filter field belongs. |
Mode | It is the mode for inputting search statements. Here, interactive and statement modes are supported. |
Default filter | It is the default filter of the dashboard and is optional. |
Form Element | Description |
Variable type | It is the variable type. Different types correspond to different configuration items and application scenarios. Here, select **Data source variable**. |
Variable name | It is the name of the variable in the search statement and can contain only letters and digits. |
Displayed name | It is the variable name displayed on the dashboard, which is optional. If it is empty, the variable name will be used automatically. |
Data source scope | It is the optional scope of the variable value and defaults to **All Log Topics**. You can select **Custom Filter** and set a filter to view only log topics that meet the condition. |
Default log topic | It is the default log topic. |
Form Element | Description |
Type | It is the variable type. Different types correspond to different configuration items and application scenarios. Here, select **Data source variable**. |
Variable name | It is the name of the variable in the search statement and can contain only letters and digits. A variable is referenced in the format of ${Variable name}. |
Variable alias | It is the variable name displayed on the dashboard, which is optional. If it is empty, the variable name will be used automatically. |
Static variable value | A static variable value needs to be added manually and will be always displayed. You can configure its alias. |
Dynamic variable value | After it is enabled, you can select a log topic, enter a search and analysis statement, and use the search and analysis result as the optional variable value. |
Default value | The default value is the variable value and is required. |
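As an illustration of how a custom variable is referenced, assume a variable named code is defined (the variable name and the http_status field are hypothetical). A chart's search statement could then be written as follows, and ${code} is replaced with the selected variable value when the dashboard is viewed:
http_status:${code} | select count(*) as pv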
body_bytes_sent:1344
client_ip:127.0.0.1
host:www.example.com
http_method:POST
http_referer:www.example.com
http_user_agent:Mozilla/5.0
proxy_upstream_name:proxy_upstream_name_4
remote_user:example
req_id:5EC4EE87A478DA3436A79550
request_length:13506
request_time:1
http_status:201
time:27/Oct/2021:03:25:24
upstream_addr:219.147.70.216
upstream_response_length:406
upstream_response_time:18
upstream_status:200
interface:proxy/upstream/example/1
* | select histogram( cast(__TIMESTAMP__ as timestamp),interval 1 minute) as analytic_time, count(*) as pv group by analytic_time order by analytic_time limit 1000
http_status:>=400 | select histogram( cast(__TIMESTAMP__ as timestamp),interval 1 minute) as analytic_time, count(*) as pv_lost group by analytic_time order by analytic_time limit 1000
* | select histogram( cast(__TIMESTAMP__ as timestamp),interval 1 minute) as analytic_time, avg(request_time) as response_time group by analytic_time order by analytic_time limit 1000
Select the ${env} variable created in the previous step. Then, charts will use the value of the variable as the data source, that is, Log topic A (production environment).
Last updated:2024-03-11 16:04:40
${__field.Name}: References the field name of the clicked value.
As shown below, clicking 8.4s triggers the redirect URL embedded with the ${__field.Name} variable, which will reference the field name of the value, that is, populate timecost in the URL.

${__value.raw}: References the clicked value (populated in the original format).
As shown below, clicking 8.4s triggers the redirect URL embedded with the ${__value.raw} variable, which will reference the raw data of the clicked value, that is, the value 8.4125 without unit or decimal place processing.

${__value.Text}: References the clicked value (populated in the string format).
As shown below, clicking 2020-10-27 17:21:00 triggers the redirect URL embedded with the ${__value.Text} variable, which will reference the clicked value and convert it into a string, that is, 2020-10-27%2017:21:00 (here, %20 is a URL-encoded space).

${__value.Numeric}: References the clicked value (populated in the numeric format).
As shown below, clicking 8.4s triggers the redirect URL embedded with the ${__value.Numeric} variable, which will reference the clicked value and convert it into a number, that is 8.4125. Here, a time value will be converted into a Unix timestamp in the numeric format, and a string value will fail to be referenced.

${__value.Time}: The timestamp of the clicked value (populated in the Unix time format).
As shown below, clicking 8.4s triggers the redirect URL embedded with the ${__value.Time} variable, which will reference the timestamp in the same line as the clicked value, that is, 2022-10-27 17:21:00 of analytic_time. The value will be further converted into the Unix format and populated as 1666891260000. If there is no such timestamp, reference will fail.

${__Fields.specific field}: Field value in the same line.
As shown below, clicking 8.4s triggers the redirect URL embedded with the ${__Fields.protocol_type} variable, which will reference the field value in the same line as the clicked value, that is, http2 of protocol_type.
https://console.intl.cloud.tencent.com/cls/search?region=xxxxxxx&topic_id=xxxxxxxx&query=server_addr:${__value.text} AND status:[400 TO 499]&time=now-1h,now
Last updated:2024-03-11 16:04:05
Last updated:2024-01-20 17:31:30
Tencent Cloud Product | Preset Dashboard |
CLB | CLB access log dashboard |
NGINX | NGINX access dashboard NGINX monitoring dashboard |
CDN | CDN access log - quality monitoring and analysis dashboard CDN access log - user behavior analysis dashboard |
COS | COS access log analysis dashboard |
FL | ENI flow log - advanced analysis dashboard CCN flow log - advanced analysis dashboard |
TKE | TKE audit log - overview dashboard TKE audit log - node operation dashboard TKE audit log - Kubernetes object operation overview dashboard TKE event log - overview dashboard TKE event log - abnormal event aggregation search dashboard |
Last updated:2024-01-20 17:31:30
Type | Description |
Tencent Cloud user | Select a Tencent Cloud user as the email recipient of the subscribed dashboard. Users with no email address configured cannot receive email notifications. |
Custom email address | Enter one or multiple custom email addresses. |
Last updated:2024-12-20 16:13:10
Scenario | Description |
Log Collection - Processing - Log Topic | Logs are collected to CLS, processed (filtered, structured), and then written to the log topic. Data processing occurs before the log topic in the data pipeline, referred to as preprocessing of data. Performing log filtering in preprocessing can effectively reduce log write traffic, index traffic, index storage, and log storage. Performing log structuring in preprocessing, with key-value indexing enabled, allows for SQL analysis of logs, dashboard configuration, and alarms. |
Log Topic - Processing - Fixed Log Topic | Store the data from the source log topic into a log topic after processing, or distribute logs to multiple log topics. |
Log Topic - Processing - Dynamic Log Topic | Based on the field value of the source log topic, dynamically create log topics and distribute related logs to the corresponding log topics. For example, if there is a field named Service in the source log topic with values like "Mysql", "Nginx", "LB", etc., CLS can automatically create log topics named Mysql, Nginx, LB, etc., and write related logs into these topics. |
Last updated:2025-12-02 18:03:32


Configuration Item | Description |
Task Name | Name of the data processing task, for example: my_transform. |
Enabling Status | Starts or stops the task. It is enabled by default. |
Preprocessing Data | Turn on the switch. The feature has two entries: Entry 1: Toggle on the Preprocessing Data switch when creating a data processing task. Entry 2: You can also click Data Processing at the bottom of the Collection Configuration page to enter the preprocessing data editing page. |
Log Topic | Specify the log topic to which the preprocessing results are written. |
External data source | Add an external data source, applicable to dimension table join scenarios. Currently, only TencentDB for MySQL is supported; see the res_rds_mysql function. Region: the region where the TencentDB for MySQL instance is located. TencentDB for MySQL instance: select the instance from the drop-down list. Username: enter your database username. Password: enter your database password. Alias: the alias of your MySQL instance, which is used as a parameter in res_rds_mysql. |
Data processing service log | The running logs of data processing tasks are saved in the cls_service_log log topic (free of charge). The alarm feature in the monitoring chart depends on this log topic and is enabled by default. |
Upload Processing Failure Logs | When enabled, logs that fail to be processed will be written to the target log topic. When disabled, logs that fail to be processed will be discarded. |
Field Name in Processing Failure Logs | If you choose to write failed logs to the target log topic, the failure information will be saved in this field, whose default name is ETLParseFailure. |
Advanced Settings | Add environment variable: Add environment variables for the data processing task runtime. For example, if you add a variable named ENV_MYSQL_INTERVAL with the value 300, you can use refresh_interval=ENV_MYSQL_INTERVAL in the res_rds_mysql function, and the task will parse it as refresh_interval=300. |


Configuration Item | Description |
Task Name | Name of the data processing task, for example: my_transform. |
Enabling Status | Starts or stops the task. It is enabled by default. |
Preprocessing Data | Turn off the switch. |
Source Log Topic | Data source of the data processing task. |
External data source | Add an external data source, applicable to dimension table join scenarios. Currently, only TencentDB for MySQL is supported; see the res_rds_mysql function. Region: the region where the TencentDB for MySQL instance is located. TencentDB for MySQL instance: select the instance from the drop-down list. Username: enter your database username. Password: enter your database password. Alias: the alias of your MySQL instance, which is used as a parameter in res_rds_mysql. |
Process Time Range | Specify the log scope for data processing. Note: Only data within the lifecycle of the log topic can be processed. |
Target Log Topic | Select a fixed log topic. Log topic: the destination log topic(s) to which the processing results are written; one or multiple can be configured. Target name: for example, in the source log topic, output loglevel=warning logs to Log Topic A, loglevel=error logs to Log Topic B, and loglevel=info logs to Log Topic C. You can configure the target names of Log Topic A, B, and C as warning, error, and info. |
Data processing service log | The running logs of data processing tasks are saved in the cls_service_log log topic (free of charge). The alarm feature in the monitoring chart depends on this log topic and is enabled by default. |
Upload Processing Failure Logs | When enabled, logs that fail to be processed will be written to the target log topic. When disabled, logs that fail to be processed will be discarded. |
Field Name in Processing Failure Logs | If you choose to write failed logs to the target log topic, the failure information will be saved in this field, whose default name is ETLParseFailure. |
Advanced Settings | Add environment variable: Add environment variables for the data processing task runtime. For example, if you add a variable named ENV_MYSQL_INTERVAL with the value 300, you can use refresh_interval=ENV_MYSQL_INTERVAL in the res_rds_mysql function, and the task will parse it as refresh_interval=300. |
Configuration Item | Description |
Task Name | Name of the data processing task, for example: my_transform. |
Enabling Status | Task start/stop, default start. |
Preprocessing Data | Turn off the switch. |
Source Log Topic | Data source of the data processing task. |
External data source | Add external data source, applicable to dimension table join scenarios. Currently only support Tencent Cloud MySQL, please refer to res_rds_mysql function. Region: The region where the cloud MySQL instance is located. TencentDB for MySQL Instance: Please select in the pull-down menu. Username: Enter your database username. Password: Enter your database password. Alias: Your MySQL alias name, which will be used in res_rds_mysql as parameter. Note: |
Process Time Range | Specify the log scope for data processing. Note: Process only data in the lifecycle of a log topic. |
Target Log Topic | Select Dynamic Log Topic. No configuration required for target log topic, it will be automatically generated according to the specified field value. |
Overrun handling | When the topic count generated by your data processing task exceeds the product spec, you can choose: Create a fallback logset and log topic, and write logs to the fallback topic (created when creating a task). Fallback logset: auto_undertake_logset, single-region single account next. Fallback topic: auto_undertake_topic_$(data processing task name). For example, if a user creates two data processing tasks etl_A and etl_B, two fallback topics will occur: auto_undertake_topic_etl_A, auto_undertake_topic_etl_B. Discard log data: Discard logs directly without creating a fallback topic. |
Data processing service log | The data processing task running logs are saved in the cls_service_log log topic (free). The Alarm feature in the monitoring chart depends on this log topic and is enabled by default. |
Upload Processing Failure Logs | When enabled, logs that fail to be processed are written to the target log topic. When disabled, processing-failed logs are dropped. |
Field Name in Processing Failure Logs | If you choose to write processing-failed logs to the target log topic, they are saved in this field, which is named ETLParseFailure by default. |
Advanced Settings | Add environment variable: add environment variables for the data processing task runtime. For example, if you add a variable named ENV_MYSQL_INTERVAL with value 300, you can write refresh_interval=ENV_MYSQL_INTERVAL in the res_rds_mysql function, and the task will resolve it to refresh_interval=300 (see the sketch after this table). Note: |
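To illustrate the Advanced Settings row above, the sketch below shows how such an environment variable might be referenced inside a processing statement. The alias, database, SQL, and field names are hypothetical placeholders, not part of the original example.
// Assuming a variable ENV_MYSQL_INTERVAL=300 was added in Advanced Settings, the task resolves it to refresh_interval=300.
t_table_map(res_rds_mysql(alias="my_mysql", database="my_db", sql="select * from dim_table", refresh_interval=ENV_MYSQL_INTERVAL), "user_id", ["region"])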
{"content": "[2021-11-24 11:11:08,232][328495eb-b562-478f-9d5d-3bf7e][INFO] curl -H 'Host: ' http://abc.com:8080/pc/api -d {\"version\": \"1.0\",\"user\": \"CGW\",\"password\": \"123\"}"}
Dialogue Turn | User Question | AI Assistant Reply |
First-round dialogue | Structure this log |
|
Second-round dialogue | The content is not standard JSON. An error occurred when using ext_json. First extract the JSON part from the content, then extract the node from the JSON. |
|
{"level":"INFO","password":"123","requestid":"328495eb-b562-478f-9d5d-3bf7e","time":"2021-11-24 11:11:08,232","user":"CGW","version":"1.0"}



Function Category | Visualization Function Name | Application Scenario |
Extract Key Value | JSON: Extract fields and field values from JSON nodes. Separator: Extract field values based on the separator; users are advised to enter the field names. Regular Expression: Extract field values using regular expressions; user input is required for the field names. | Log Structuring |
Log Processing | Filter Logs: Configure conditions for filtering logs (multiple conditions are in an OR relationship). For example, if field A exists or field B does not exist, filter out the log. Distribute Logs: Configure conditions for distributing logs. For example, if status="error" and message contains "404", distribute to topic A; if status="running" and message contains "200", distribute to topic B. Retain Logs: Configure conditions for preserving logs. | Delete/Retain Logs |
Field Processing | Delete fields Rename Field | Delete/Rename Field |
Last updated:2024-01-20 17:44:35
Last updated:2025-06-11 17:26:46

Function | Description | Syntax Description | Return Value Type |
ext_sep | Extracts field value content based on a separator. | ext_sep("Source field name", "Target field 1,Target field 2,Target field...", sep="Separator", quote="Non-segmentation part", restrict=False, mode="overwrite") | Log after extraction (LOG) |
ext_sepstr | Extracts field value content based on specified characters (string). | ext_sepstr("Source field name","Target field 1,Target field 2,Target field...", sep="abc", restrict=False, mode="overwrite") | Log after extraction (LOG) |
ext_json | Extracts field values from JSON data. | ext_json("Source field name", prefix="", suffix="", format="full", exclude_node="JSON nodes not to expand") | Log after extraction (LOG) |
ext_json_jmes | Extracts a field value based on a JMES expression. | ext_json_jmes("Source field name", jmes= "JSON extraction expression", output="Target field", ignore_null=True, mode="overwrite") | Log after extraction (LOG) |
ext_kv | Extracts field values by using two levels of separators. | ext_kv("Source field name", pair_sep=r"\s", kv_sep="=", prefix="", suffix="", mode="fill-auto") | Log after extraction (LOG) |
ext_regex | Extracts field values by using a regular expression. | ext_regex("Source field name", regex="Regular expression", output="Target field 1,Target field 2,Target field...", mode="overwrite") | Log after extraction (LOG) |
ext_first_notnull | Returns the first non-null and non-empty result value. | ext_first_notnull(Value 1,Value 2,...) | The first non-null result value |
Function | Description | Syntax Description | Return Value Type |
enrich_table | Uses CSV structure data to match fields in logs and, when matched fields are found, the function adds other fields and values in the CSV data to the source logs. | enrich_table("CSV source data", "CSV enrichment field", output="Target field 1,Target field 2,Target field....", mode="overwrite") | Mapped log (LOG) |
enrich_dict | Uses dict structure data to match a field value in a log. If the specified field and value match a key in the dict structure data, the function assigns the value of the key to another field in the log. | enrich_dict("JSON dictionary", "Source field name", output=Target field name, mode="overwrite") | Mapped log (LOG) |
Function | Description | Syntax Description | Return Value Type |
compose | Combines multiple operation functions. Providing combination capabilities similar to those of branch code blocks, this function can combine multiple operation functions and execute them in sequence. It can be used in combination with branches and output functions. | compose("Function 1","Function 2", ...) | Log (LOG) |
t_if | Executes a corresponding function if a condition is met and does not perform any processing if the condition is not met. | t_if("Condition", Function) | Log (LOG) |
t_if_not | Executes a corresponding function if a condition is not met and does not perform any processing if the condition is met. | t_if_not("Condition",Function) | Log (LOG) |
t_if_else | Executes a function based on the evaluation result of a condition. | t_if_else("Condition", Function 1, Function 2) | Log (LOG) |
t_switch | Executes different functions depending on whether branch conditions are met. If all conditions are not met, the data is deleted. | t_switch("Condition 1", Function 1, "Condition 2", Function 2, ...) | Log (LOG) |
Function | Description | Syntax Description | Return Value Type |
log_output | Outputs a row of log to a specified log topic. This function can be used independently or together with branch conditions. | log_output(Log topic alias) (The alias here is the target log topic alias specified when the data processing task is created.) | No return value |
log_split | Splits a row of log into multiple rows of logs based on the value of a specified field by using a separator and JMES expression. | log_split(Field name, sep=",", quote="\"", jmes="", output="") | Log (LOG) |
log_drop | Deletes logs that meet a specified condition. | log_drop(Condition 1) | Log (LOG) |
log_keep | Retains logs that meet a specified condition. | log_keep(Condition 1) | Log (LOG) |
log_split_jsonarray_jmes | Splits and expands the JSON array in the log according to JMES syntax. | log_split_jsonarray_jmes("field", jmes="items", prefix="") | Log (LOG) |
Function | Description | Syntax Description | Return Value Type |
extract_tag | Extracts tag values from log fields and uses them as tags for dynamically generated log topics. | extract_tag(tag name 1, tag value 1, tag name 2, tag value 2, ...) | Value string type (STRING) |
fields_drop | Deletes the fields that meet a specified condition. | fields_drop(Field name 1, Field name 2, ..., regex=False,nest=False) | Log (LOG) |
fields_keep | Retains the fields that meet a specified condition. | fields_keep(Field name 1, Field name 2, ..., regex=False) | Log (LOG) |
fields_pack | Matches field names based on a regular expression and encapsulates the matched fields into a new field whose value is in JSON format. | fields_pack(Target field name, include=".*", exclude="", drop_packed=False) | Log (LOG) |
fields_set | Sets field values or adds fields. | fields_set(Field name 1, Field value 1, Field name 2, Field value 2, mode="overwrite") | Log (LOG) |
fields_rename | Renames fields. | fields_rename(Field name 1, New field name 1, Field name 2, New field name 2, regex=False) | Log (LOG) |
has_field | If the specified field exists, returns `True`. Otherwise, returns `False`. | has_field(Field name) | Condition value (BOOL) |
not_has_field | If the specified field does not exist, returns `True`. Otherwise, returns `False`. | not_has_field(Field name) | Condition value (BOOL) |
v | Gets the value of a specified field and returns the corresponding string. | v(Field name) | Value string type (STRING) |
Function | Description | Syntax Description | Return Value Type |
array_get | Retrieves the value at a specified index of an array and returns it as a string. | array_get(array, index_position) | Value string type (STRING) |
json_select | Extracts a JSON field value with a JMES expression and returns the JSON string of the extraction result. | json_select(v(Field name), jmes="") | Value string type (STRING) |
xml_to_json | Parses and converts an XML-formatted value to a JSON string. The input value must be an XML string. Otherwise, a conversion exception will occur. | xml_to_json(Field value) | Value string type (STRING) |
json_to_xml | Parses and converts a JSON string value to an XML string. | json_to_xml(Field value) | Value string type (STRING) |
if_json | Checks whether a value is a JSON string. | if_json(Field value) | Condition value (BOOL) |
Function | Description | Syntax Description | Return Value Type |
sensitive_detection | Detects sensitive information such as ID card and bank card numbers. | sensitive_detection(scope="", ratio=1, discover_items="", replace_items="") | Value string type (STRING) |
regex_match | Matches data in full or partial match mode based on a regular expression and returns whether the match is successful. | regex_match(Field value, regex="", full=True) | Condition value (BOOL) |
regex_select | Matches data based on a regular expression and returns the corresponding partial match result. You can specify the sequence number of the matched expression and the sequence number of the group to return (partial match + sequence number of the specified matched group). If no data is matched, an empty string is returned. | regex_select(Field value, regex="", index=1, group=1) | Value string type (STRING) |
regex_split | Splits a string and returns a JSON array of the split strings (partial match). | regex_split(Field value, regex="", limit=100) | Value string type (STRING) |
regex_replace | Matches data based on a regular expression and replaces the matched data (partial match). | regex_replace(Field value, regex="", replace="", count=0) | Value string type (STRING) |
regex_findall | Matches data based on a regular expression and returns a JSON array of the matched data (partial match). | regex_findall(Field value, regex="") | Value string type (STRING) |
Function | Description | Syntax Description | Return Value Type |
custom_cls_log_time | Customizes the log time. A new log time is generated based on your processing rules. Second, millisecond, microsecond, and nanosecond precision is supported. | custom_cls_log_time(time) | STRING |
dt_str | Converts a time field value (a date string in a specific format or timestamp) to a target date string of a specified time zone and format. | dt_str(Value, format="Formatted string", zone="") | STRING |
dt_to_timestamp | Converts a time field value (a date string in a specified format; time zone specified) to a UTC timestamp. | dt_to_timestamp(Value, zone="") | STRING |
dt_from_timestamp | Converts a timestamp field value to a time string in the specified time zone. | dt_from_timestamp(Value, zone="") | STRING |
dt_now | Obtains the current datetime of the processing calculation. | dt_now(format="Formatted string", zone="") | STRING |
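As a minimal, hedged sketch of how the datetime functions above might be used (the field name ts and its value are illustrative, not from the original examples):
// Raw log: {"ts": "1650440364"}
// Convert the UNIX timestamp in `ts` to a datetime string in the default time zone and write it to a new field `dt`.
fields_set("dt", dt_from_timestamp(v("ts"), zone=""))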
Function | Description | Syntax Description | Return Value Type |
str_exist | Searches for a substring in a value and returns True or False. | str_exist(data1, data2, ignore_upper=False) | BOOL |
str_count | Searches for a substring in a specified range of a value and returns the number of occurrences of the substring. | str_count(Value, sub="", start=0, end=-1) | INT |
str_len | Returns the length of a string. | str_len(Value) | INT |
str_uppercase | Converts a string to uppercase. | str_uppercase(Value) | STRING |
str_lowercase | Converts a string to lowercase. | str_lowercase(Value) | STRING |
str_join | Concatenates input values by using a concatenation string. | str_join(Concatenation string 1, Value 1, Value 2, ...) | STRING |
str_replace | Replaces an old string with a new string. | str_replace(Value, old="", new="", count=0) | STRING |
str_format | Formats strings. | str_format(Formatting string, Value 1, Value 2, ...) | STRING |
str_strip | Deletes specified characters from both the start and end of a string and returns the remaining part. | str_strip(Value, chars="\t\r\n") | STRING |
str_lstrip | Deletes specified characters from the start of a string and returns the remaining part. | str_lstrip(Value, chars="\t\r\n") | STRING |
str_rstrip | Deletes specified characters from the end of a string and returns the remaining part. | str_rstrip(Value, chars="\t\r\n") | STRING |
str_find | Checks whether a string contains a specified substring and returns the position of the first occurrence of the substring in the string. | str_find(Value, sub="", start=0, end=-1) | INT |
str_start_with | Checks whether a string starts with a specified string. | str_start_with(Value, sub="", start=0, end=-1) | BOOL |
str_end_with | Checks whether a string ends with a specified string. | str_end_with(Value, sub="", start=0, end=-1) | BOOL |
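A minimal, hedged sketch combining several of the string processing functions above (the field name and value are illustrative):
// Raw log: {"path": "  /VAR/LOG/app.log  "}
// Strip the surrounding spaces, convert the value to lowercase, and write the result to a new field `path_clean`.
fields_set("path_clean", str_lowercase(str_strip(v("path"), chars=" ")))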
Function | Description | Syntax Description | Return Value Type |
op_if | Returns a value based on a specified condition. | op_if(Condition 1, Value 1, Value 2) | If the condition is `true`, `Value 1` is returned; otherwise, `Value 2` is returned. |
op_and | Performs the AND operation on values. If all the specified parameter values are evaluated to true, `True` is returned; otherwise, `False` is returned. | op_and(Value 1, Value 2, ...) | BOOL |
op_or | Performs the OR operation on values. If one or more of the specified parameter values are evaluated to true, `True` is returned; otherwise, `False` is returned. | op_or(Value 1, Value 2, ...) | BOOL |
op_not | Performs the NOT operation on values. | op_not(Value) | BOOL |
op_eq | Compares two values. If the values are equal, `True` is returned. | op_eq(Value 1, Value 2) | BOOL |
op_ge | Compares two values. If `Value 1` is greater than or equal to `Value 2`, `True` is returned. | op_ge(Value 1, Value 2) | BOOL |
op_gt | Compares two values. If `Value 1` is greater than `Value 2`, `True` is returned. | op_gt(Value 1, Value 2) | BOOL |
op_le | Compares two values. If `Value 1` is less than or equal to `Value 2`, `True` is returned. | op_le(Value 1, Value 2) | BOOL |
op_lt | Compares two values. If `Value 1` is less than `Value 2`, `True` is returned. | op_lt(Value 1, Value 2) | BOOL |
op_add | Returns the sum of two specified values. | op_add(Value 1, Value 2) | Calculation result |
op_sub | Returns the difference between two specified values. | op_sub(Value 1, Value 2) | Calculation result |
op_mul | Returns the product of two specified values. | op_mul(Value 1, Value 2) | Calculation result |
op_div | Returns the quotient of two specified values. | op_div(Value 1, Value 2) | Calculation result |
op_sum | Returns the sum of multiple specified values. | op_sum(Value 1, Value 2, ...) | Calculation result |
op_mod | Returns the remainder of a specified value divided by the other specified value. | op_mod(Value 1, Value 2) | Calculation result |
op_null | Checks whether a value is `null`. If so, `true` is returned; otherwise, `false` is returned. | op_null(Value) | BOOL |
op_notnull | Checks whether a value is not `null`. If so, `true` is returned; otherwise, `false` is returned. | op_notnull(Value) | BOOL |
op_str_eq | Compares string values. If they are equal to each other, `true` is returned. | op_str_eq(Value 1, Value 2, ignore_upper=False) | BOOL |
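A minimal, hedged sketch of the operation functions above (the field name, value, and threshold are illustrative; ct_int is the conversion function described in the next table):
// Raw log: {"latency_ms": "350"}
// Mark requests slower than 300 ms: if latency_ms > 300, set `is_slow` to "true", otherwise "false".
fields_set("is_slow", op_if(op_gt(ct_int(v("latency_ms")), 300), "true", "false"))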
Function | Description | Syntax Description | Return Value Type |
ct_int | Converts a value (whose base can be specified) to a decimal integer. | ct_int(Value 1, base=10) | Calculation result |
ct_float | Converts a value to a floating-point number. | ct_float(Value) | Calculation result |
ct_str | Converts a value to a string. | ct_str(Value) | Calculation result |
ct_bool | Converts a value to a Boolean value. | ct_bool(Value) | Calculation result |
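A minimal, hedged sketch of the conversion functions above (the field names and values are illustrative):
// Raw log: {"hits": "42", "total": "100"}
// Convert both string values to floating-point numbers and store their ratio in a new field `hit_ratio`.
fields_set("hit_ratio", op_div(ct_float(v("hits")), ct_float(v("total"))))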
Function | Description | Syntax Description | Return Value Type |
md5_encoding | Calculates and returns the MD5 checksum of a value. | md5_encoding(value) | Calculation result |
uuid | Generates a universally unique identifier (UUID). | uuid() | STRING |
str_encode | Encodes a string in the specified format. | str_encode(data, encoding="utf8", errors="ignore") | STRING(UTF8 Format) |
str_decode | Decodes a string in the specified format. | str_decode(data, encoding="utf8", errors="ignore") | STRING |
base64_encode | Encodes a string in Base64. | base64_encode(value, format="RFC3548") | STRING(base64 Format) |
base64_decode | Decodes a Base64-encoded string. | base64_decode(value, format="RFC3548") | STRING |
decode_url | Decodes an encoded URL. | decode_url(Value) | STRING |
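A minimal, hedged sketch of the encoding and decoding functions above (the field name and URL are illustrative):
// Raw log: {"url": "https%3A%2F%2Fexample.com%2Fsearch%3Fq%3Dcls"}
// Decode the URL-encoded value and also record an MD5 fingerprint of the original value.
fields_set("url_decoded", decode_url(v("url")))
fields_set("url_md5", md5_encoding(v("url")))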
Function | Description | Syntax Description | Return Value Type |
geo_parse | Parses the geographical location. | geo_parse(Field value, keep=("country","province","city"), ip_sep=",") | JSON string |
is_subnet_of | Checks whether an IP is in the target IP range. Multiple IP ranges are supported. | is_subnet_of(Network segments, IP) | BOOL |
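A minimal, hedged sketch of the IP processing functions above (the field name, IP, and network range are illustrative):
// Raw log: {"client_ip": "203.0.113.10"}
// Keep only logs whose client_ip falls within the 203.0.113.0/24 range, then parse its geographical location into a new field `geo`.
log_keep(is_subnet_of("203.0.113.0/24", v("client_ip")))
fields_set("geo", geo_parse(v("client_ip")))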
Last updated:2025-12-03 18:30:47



Function Category | Visualization Function Name | Applicable Scenario |
Extract Key Value | JSON: Extract fields and field values from JSON nodes. Separator: Extract field values based on the separator; users are advised to enter the field names. Regular Expression: Extract field values using a regular expression; user input is required for the field names. | Log structuring |
Log Processing | Filter Logs: Configure conditions for filtering logs (multiple conditions are in an OR relationship). For example, if field A exists or field B does not exist, filter out the log. Distribute Logs: Configure conditions for distributing logs. For example, distribute logs with status="error" and message containing "404" to topic A, and logs with status="running" and message containing "200" to topic B. Retain Logs: Configure conditions for preserving logs. | Delete/Preserve logs |
Process Field | Delete Fields Rename Field | Delete/Rename Field |
{"log": "{\"offset\":281,\"file\":{\"path\":\"/logs/gate.log\"}}","message": "2024-10-11 15:32:10.003 DEBUG [gateway3036810e0c33b] ","content":"cls_ETL|1.06s|fields_renamed"}
Visualization Function | Configure Project | Value in Example | Required | Description |
Processing function - JSON | Description | - | No | Fill in your description of the processing function |
| Field | log | Yes | Select the original field to process |
| Prefix of the new field. | - | No | Add a prefix to the extracted new field name |
| New field suffix. | - | No | Add a suffix to the extracted new field name |
| Original field auto-delete | ☑️ | No | Process and delete the original field |
Processing function - Regular | Description | - | No | Fill in your description of the processing function |
| Field | message | Yes | Select the original field to process |
| Regular Expression | (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}) ([A-Z]{5}|[A-Z]{4}) \[(.+)\] | Yes | Extract (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d{3}) as the new field dates; extract ([A-Z]{5}|[A-Z]{4}) as the new field level; extract \[(.+)\] as the new field traceid |
| New field name | dates,level,traceid | Yes | New fields extracted using the regular expression |
| Original field auto-delete | ☑️ | No | Process and delete the original field |
Processing function - Separator | Description | - | No | Fill in your description of the processing function |
| Field | content | Yes | Select the original field to process |
| Separator | : | Yes | Split logs into multiple field values |
| new field name | module,delay_time,msg | Yes | New field extracted using a delimiter |
| Original field auto-delete | ☑️ | No | Process and delete the original field |
{"dates":"2024-10-11 15:32:10.003","delay_time":"1.06s","level":"DEBUG","module":"cls_ETL","msg":"fields_renamed","offset":"281","path":"/logs/gate.log","traceid":"gateway3036810e0c33b"}
{"__FILENAME__": "python.log","__SOURCE__": "127.0.0.1","log_level": "ERROR","status": "404","time": "2024-07-21 05:17:30.421"}
Visualization Function | Configure Project | Value in Example | Required | Description |
Processing function - Distribute Logs | Description | - | No | Fill in your description of the processing function |
| Group 1 | log_level="info" | Yes | Configure your distribution conditions. When the conditions are met, logs are sent to the corresponding log topic. The example shows a log topic named info. |
| Group 2 | log_level="warning" | No | Configure your distribution conditions. When conditions are met, logs will be sent to the corresponding log topic. The example shows a log topic named warning. |
| Group 3 | log_level="error" and status="400" | No | Configure your distribution conditions. When conditions are met, logs will be sent to the corresponding log topic. The example shows a log topic named error. |
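A hedged sketch of processing functions that express roughly the same distribution logic as the console example above. The target aliases info, warning, and error are assumed to be configured on the task; the console normally generates the underlying statements for you:
t_switch(op_str_eq(v("log_level"), "info"), log_output("info"), op_str_eq(v("log_level"), "warning"), log_output("warning"), op_and(op_str_eq(v("log_level"), "error"), op_str_eq(v("status"), "400")), log_output("error"))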
Last updated:2024-01-20 17:44:35

ext_sep("Source field name", "Target field 1,Target field 2,Target field...", sep="Separator", quote="Non-segmentation part", restrict=False, mode="overwrite")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
field | Field to extract | string | Yes | - | Name of an existing field in the user log |
output | A single field name or multiple new field names concatenated with commas | string | Yes | - | - |
sep | Separator | string | No | , | Any single character |
quote | Characters that enclose the value | string | No | - | - |
restrict | Handling mode when the number of extracted values is inconsistent with the number of target fields entered by the user: True: ignore the extraction function and do not perform any extraction processing. False: try to match the first few fields | bool | No | False | - |
mode | Write mode of the new field | string | No | overwrite | - |
{"content": "hello Go,hello Java,hello python"}
// Use a comma as the separator to divide the `content` field into three parts, corresponding to the `f1`, `f2`, and `f3` fields separately.
ext_sep("content", "f1, f2, f3", sep=",", quote="", restrict=False, mode="overwrite")
// Delete the `content` field.
fields_drop("content")
{"f1":"hello Go","f2":"hello Java","f3":"hello python"}
Treat part of the content string as a whole by using quote.
Raw log:
{"content": " Go,%hello ,Java%,python"}
ext_sep("content", "f1, f2", quote="%", restrict=False)
// Though `%hello ,Java%` does contain a comma, it does not participate in separator extraction as a whole.
{"content":" Go,%hello ,Java%,python","f1":" Go","f2":"hello ,Java"}
restrict=True indicates that if the number of divided values differs from the number of target fields, the function is not executed.
Raw log:
{"content": "1,2,3"}
ext_sep("content", "f1, f2", restrict=True)
{"content":"1,2,3"}
ext_sepstr("Source field name","Target field 1,Target field 2,Target field...", sep="abc", restrict=False, mode="overwrite")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
field | Field to extract | string | Yes | - | Name of an existing field in the user log |
output | A single field name or multiple new field names concatenated with commas | string | Yes | - | - |
sep | Separator (string) | string | No | , | - |
restrict | Handling mode when the number of extracted values is inconsistent with the number of target fields entered by the user: True: ignore the extraction function and do not perform any extraction processing. False: try to match the first few fields | bool | No | False | - |
mode | Write mode of the new field | string | No | overwrite | - |
{"message":"1##2##3"}
// Use "##" as the separator to extract key-values.
ext_sepstr("message", "f1,f2,f3,f4", sep="##")
// If the number of target fields is greater than the number of divided values, `""` is returned for the excessive fields.
{"f1":"1","f2":"2","message":"1##2##3","f3":"3","f4":""}
ext_json("Source field name",prefix="",suffix="",format="full",exclude_node="JSON nodes not to expand")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
field | Field to extract | string | Yes | - | - |
prefix | Prefix of the new field | string | No | - | - |
suffix | Suffix of the new field | string | No | - | - |
format | full: the field name is in full path format (parent + sep + prefix + key + suffix). simple: non-full path format (prefix + key + suffix) | string | No | simple | - |
sep | Concatenation character, used to concatenate node names | string | No | # | - |
depth | Depth to which the function expands the source field, beyond which nodes will not be expanded any more | number | No | 100 | 1-500 |
expand_array | Whether to expand an array node | bool | No | False | - |
include_node | Allowlist of node names that match the specified regular expression | string | No | - | - |
exclude_node | Blocklist of node names that match the specified regular expression | string | No | - | - |
include_path | Allowlist of node paths that match the specified regular expression | string | No | - | - |
exclude_path | Blocklist of node paths that match the specified regular expression | string | No | - | - |
retain | Retains some special symbols without escaping them, such as \n and \t. | string | No | - | - |
escape | Whether to escape data. Default value: True. If the data contains special symbols that should not be escaped, set this to False. | bool | No | True | - |
{"data": "{ \"k1\": 100, \"k2\": { \"k3\": 200, \"k4\": { \"k5\": 300}}}"}
ext_json("data")
{"data":"{ \"k1\": 100, \"k2\": { \"k3\": 200, \"k4\": { \"k5\": 300}}}","k1":"100","k3":"200","k5":"300"}
Do not extract the node sub_field1.
Raw log:
{"content": "{\"sub_field1\":1,\"sub_field2\":\"2\"}"}
// `exclude_node="sub_field1"` indicates not to extract that node.
ext_json("content", format="full", exclude_node="sub_field1")
{"sub_field2":"2","content":"{\"sub_field1\":1,\"sub_field2\":\"2\"}"}
Add a prefix to subnodes.
Raw log:
{"content": "{\"sub_field1\":{\"sub_sub_field3\":1},\"sub_field2\":\"2\"}"}
// When `sub_field2` is extracted, the prefix `udf_` is automatically added to it, making it `udf_sub_field2`.
ext_json("content", prefix="udf_", format="simple")
{"content":"{\"sub_field1\":{\"sub_sub_field3\":1},\"sub_field2\":\"2\"}","udf_sub_field2":"2","udf_sub_sub_field3":"1"}
// `format="full"` indicates to retain the hierarchy in the extracted field name. When `sub_field2` is extracted, the name of its parent node is automatically prepended, making it `#content#__sub_field2`.
ext_json("content", prefix="__", format="full")
{"#content#__sub_field2":"2","#content#sub_field1#__sub_sub_field3":"1","content":"{\"sub_field1\":{\"sub_sub_field3\":1},\"sub_field2\":\"2\"}"}
{"content": "{\"sub_field1\":1,\"sub_field2\":\"\\n2\"}"}
ext_json("content",retain="\n")
{"sub_field2":"\\n2","content":"{\"sub_field1\":1,\"sub_field2\":\"\\n2\"}","sub_field1":"1"}
{"content": "{\"sub_field1\":1,\"sub_field2\":\"\\n2\\t\"}"}
ext_json("content",retain="\n,\t")
{"sub_field2":"\\n2\\t","content":"{\"sub_field1\":1,\"sub_field2\":\"\\n2\\t\"}","sub_field1":"1"}
{"message":"{\"ip\":\"183.6.104.157\",\"params\":\"[{\\\"tokenType\\\":\\\"RESERVED30\\\",\\\"otherTokenInfo\\\":{\\\"unionId\\\":\\\"123\\\"},\\\"unionId\\\":\\\"adv\\\"}]\"}"}
ext_json("message", escape=False)fields_drop("message")
{"ip":"183.6.104.157", "params":"[{\"tokenType\":\"RESERVED30\",\"otherTokenInfo\":{\"unionId\":\"123\"},\"unionId\":\"adv\"}]"}
ext_json_jmes("Source field name", jmes= "JSON extraction expression", output="Target field", ignore_null=True, mode="overwrite")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
field | Field to extract | string | Yes | - | - |
jmes | JMES extraction expression | string | Yes | - | - |
output | Output field name. Only a single field is supported. | string | Yes | - | - |
ignore_null | Whether to ignore a node whose value is null. The default value is True, ignoring fields whose value is null. Otherwise, an empty string is returned. | bool | No | True | - |
mode | Write mode of the new field. Default value: overwrite | string | No | overwrite | - |
{"content": "{\"a\":{\"b\":{\"c\":{\"d\":\"value\"}}}}"}
// `jmes="a.b.c.d"` means to extract the value of `a.b.c.d`.
ext_json_jmes("content", jmes="a.b.c.d", output="target")
{"content":"{\"a\":{\"b\":{\"c\":{\"d\":\"value\"}}}}","target":"value"}
{"content": "{\"a\":{\"b\":{\"c\":{\"d\":\"value\"}}}}"}
// `jmes="a.b.c"` means to extract the value of `a.b.c`.
ext_json_jmes("content", jmes="a.b.c", output="target")
{"content":"{\"a\":{\"b\":{\"c\":{\"d\":\"value\"}}}}","target":"{\"d\":\"value\"}"}
ext_regex("Source field name", regex="Regular expression", output="Target field 1,Target field 2,Target field...", mode="overwrite")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
field | Field to extract | string | Yes | - | - |
regex | Regular expression. If the expression contains a special character, escaping is required; otherwise, a syntax error is reported. | string | Yes | - | - |
output | A single field name or multiple new field names concatenated with commas | string | No | - | - |
mode | Write mode of the new field. Default value: overwrite | string | No | overwrite | - |
{"content": "1234abcd5678"}
ext_regex("content", regex="\d+", output="target1,target2")
{"target2":"5678","content":"1234abcd5678","target1":"1234"}
{"content": "1234abcd"}
ext_regex("content", regex="(?<target1>\d+)(.*)", output="target2")
{"target2":"abcd","content":"1234abcd","target1":"1234"}
ext_kv("Source field name", pair_sep=r"\s", kv_sep="=", prefix="", suffix="", mode="fill-auto")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
field | Field to extract | string | Yes | - | - |
pair_sep | Level-1 separator, separating multiple key-value pairs | string | Yes | - | - |
kv_sep | Level-2 separator, separating keys and values | string | Yes | - | - |
prefix | Prefix of the new field | string | No | - | - |
suffix | Suffix of the new field | string | No | - | - |
mode | Write mode of the new field. Default value: overwrite | string | No | - | - |
{"content": "a=1|b=2|c=3"}
ext_kv("content", pair_sep="|", kv_sep="=")
{"a":"1","b":"2","c":"3","content":"a=1|b=2|c=3"}
ext_first_notnull(value 1, value 2, ...)
Parameter | Description | Type | Required | Default Value | Value Range |
Variable parameter list | Parameters or expressions that participate in the calculation | string | Yes | - | - |
{"data1": null, "data2": "", "data3": "first not null"}
fields_set("result", ext_first_notnull(v("data1"), v("data2"), v("data3")))
{"result":"first not null","data3":"first not null","data2":"","data1":"null"}
ext_grok(Field value, grok="", extend="")
Parameter | Description | Type | Required | Default Value | Value Range |
field | Field value | string | Yes | - | - |
grok | Expression | string | Yes | - | - |
extend | Custom Grok expression | string | No | - | - |
{"content":"2019 June 24 \"I am iron man\""}
ext_grok("content", grok="%{YEAR:year} %{MONTH:month} %{MONTHDAY:day} %{QUOTEDSTRING:motto}")
fields_drop("content")
{"day":"24", "month":"June", "motto":"I am iron man", "year":"2019"}
{"content":"Beijing-1104,Beijing-Beijing"}
ext_grok("content", grok="%{ID1:user_id1},%{ID2:user_id2}", extend="ID1=%{WORD}-%{INT},ID2=%{WORD}-%{WORD}")
fields_drop("content")
{"user_id1":"Beijing-1104", "user_id2":"Beijing-Beijing"}
Last updated:2025-12-11 11:18:23
t_table_map(data, field, output_fields, missing=None, mode="fill-auto")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Target table (dimension table) | table | Yes | - | - |
field | Source field in logs used to match against the table. If the field does not exist in the log, no operation is performed. Supports string and string list. | any | Yes | - | - |
output_fields | Mapped fields, such as ["province", "pop"]. Supports String and String List. | any | Yes | - | - |
missing | Value assigned to the output fields (output_fields) when no matching row is found. | String | No | - | - |
mode | Field coverage mode. Default is fill-auto. | String | No | fill-auto | - |

[{"user_id": 1},{"user_id": 3}]
// On the console, set the alias of the external MySQL data source to hm, the MySQL database to test222, and the table name to test.
// Pull all data from MySQL and use the t_table_map function to join the dimension table.
t_table_map(res_rds_mysql(alias="hm",database="test222",sql="select * from test"),"user_id",["gameid", "game"])
[{"user_id":"1"},{"game":"wangzhe","gameid":"123","user_id":"3"}]
id | game_id | game_name | region | game_details |
1 | 10001 | Honor of Kings | CN | MOBA |
2 | 10002 | League of Legends | NA | PC MOBA |
3 | 10003 | Genshin Impact | CN | RPG |
4 | 10004 | Black Myth: Wukong | CN | PC Game |
5 | 10005 | Diablo | NA | Role play |
[{"Pid": 1},{"Pid": 2},{"Pid": 3}]
// On the console, set the alias of the external MySQL data source to hm, the MySQL database to test222, and the table name to test.
// Pull part of the data from MySQL and use the t_table_map function to join the dimension table.
// The log field Pid corresponds to the id field in MySQL (the names differ).
// Enrich logs with the game_details field from MySQL, renamed to game_info in the logs.
t_table_map(res_rds_mysql(alias="hm",database="test222",sql="select * from test where region='CN'"),[["Pid", "id"]],["game_name",["game_details","game_info"]])
[{"Pid":"1","game_info":"MOBA mobile game","game_name":"Honor of Kings"},{"Pid":"2"},{"Pid":"3","game_info":"open world RPG game","game_name":"Genshin Impact"}]
enrich_table("csv source data", "csv enrichment field", output="target field1, target field2, target field...", mode="overwrite")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Input CSV data, where the first row contains the column names and the remaining rows contain the corresponding values. Example: region,count\nbj, 200\ngz, 300 | String | Yes | - | - |
fields | Column name to match. If the field name in the CSV data is the same as the field with the same name in the log, the matching is successful. The value can be a single field name or multiple new field names concatenated with commas. | String | Yes | - | - |
output | Output field list. The value can be a single field name or multiple new field names concatenated with commas. | String | Yes | - | - |
mode | Write mode of the new field. Default value: overwrite | String | No | overwrite | - |
{"region": "gz"}
enrich_table("region,count\nbj,200\ngz,300", "region", output="count")
{"count":"300","region":"gz"}
enrich_dict("JSON dictionary", "source field name", output=target field name, mode="overwrite")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Input dict data, which must be an escaped JSON object string, for example: {\"200\":\"SUCCESS\"}. | String | Yes | - | - |
fields | Field name to match. If the value of the key in the dict data is the same as the value of the specified field, the matching is successful. The value can be a single field name or multiple new field names concatenated with commas. | String | Yes | - | - |
output | Target field list. After successful matching, the function writes the corresponding values in the dict data to the target field list. The value can be a single field name or multiple new field names concatenated with commas. | String | Yes | - | - |
mode | Write mode of the new field. Default value: overwrite | String | No | overwrite | - |
{"status": "500"}
enrich_dict("{\"200\":\"SUCCESS\",\"500\":\"FAILED\"}", "status", output="message")
{"message":"FAILED","status":"500"}
Last updated:2024-01-20 17:44:35

compose(Function 1,Function 2, ...)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
Variable parameter, function | The parameter must be a function whose return value type is LOG. | string | At least one function parameter | - | - |
Execute the enrich_dict function first and then the fields_set function.
Raw log:
{"status": "500"}
// 1. `enrich_dict` function: use the data in the dict to enrich the raw log (whose `status` is `500`) and generate a new field after the enrichment (field `message` with value `FAILED`).
// 2. `fields_set` function: add a field `new` and assign the value `1` to it.
compose(enrich_dict("{\"200\":\"SUCCESS\",\"500\":\"FAILED\"}", "status", output="message"), fields_set("new", 1))
// The final log contains 3 fields:
{"new":"1","message":"FAILED","status":"500"}
{"status": "500"}
compose(fields_set("new", 1))
{"new":"1","status":"500"}
{"condition1": 0,"condition2": 1, "status": "500"}
t_if_else(v("condition2"), compose(fields_set("new", 1),log_output("target")), log_output("target2"))
target output: {"new":"1","condition1":"0","condition2":"1","status":"500"}
t_if(Condition 1, Function 1)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
condition | Function expression whose return value is of bool type | bool | Yes | - | - |
function | Function expression whose return value is of LOG type | string | Yes | - | - |
{"condition": 1, "status": "500"}
t_if(True, fields_set("new", 1))
{"new":"1","condition":"1","status":"500"}
// If the value of `condition` is `1` (true), add a field `new` and assign value `1` to it.{"condition": 1, "status": "500"}
t_if(v("condition"), fields_set("new", 1))
{"new":"1","condition":"1","status":"500"}
t_if_not(Condition 1, Function 1)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
condition | Function expression whose return value is of bool type | bool | Yes | - | - |
function | Function expression whose return value is of LOG type | string | Yes | - | - |
{"condition": 0, "status": "500"}
t_if_not(v("condition"), fields_set("new", 1))
{"new":"1","condition":"0","status":"500"}
t_if_else("Condition 1", Function 1, Function 2)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
condition | Function expression whose return value is of bool type | bool | Yes | - | - |
function | Function expression whose return value is of LOG type | string | Yes | - | - |
function | Function expression whose return value is of LOG type | string | Yes | - | - |
{"condition": 1, "status": "500"}
t_if_else(v("condition"), fields_set("new", 1), fields_set("new", 2))
{"new":"1","condition":"1","status":"500"}
t_switch("Condition 1", Function 1, "Condition 2", Function 2, ...)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
Variable parameter | A list of condition-function expression pairs | - | - | - | - |
{"condition1": 0,"condition2": 1, "status": "500"}
// If `condition1` is evaluated to true, add a field `new` with value `1`; if `condition2` is evaluated to true, add a field `new` with value `2`. Here, `False` is returned for `condition1`, so the first branch is skipped; `True` is returned for `condition2`, so the `new` field is added with value `2`.
t_switch(v("condition1"), fields_set("new", 1), v("condition2"), fields_set("new", 2))
{"new":"2","condition1":"0","condition2":"1","status":"500"}
Last updated:2025-12-05 11:26:34

Parameter | Description | Type | Required | Default Value | Value Range |
alias | Log topic alias | string | Yes | - | - |
Distribute logs to different log topics based on the three values (warning, info, and error) of the loglevel field.
Raw log:
[{"loglevel": "warning"},{"loglevel": "info"},{"loglevel": "error"}]
// The `loglevel` field has 3 values (`warning`, `info`, and `error`), so the logs are distributed to 3 different log topics accordingly.
t_switch(regex_match(v("loglevel"),regex="info"),log_output("info_log"),regex_match(v("loglevel"),regex="warning"),log_output("warning_log"),regex_match(v("loglevel"),regex="error"),log_output("error_log"))
log_auto_output(topic_name="", logset_name="", index_options="", period=3, storage_type="", hot_period=0)
Parameter | Description | Type | Required | Default Value | Parameter Description |
topic_name | Log Topic Name | string | y | - | If topic_name contains "|", the "|" is removed from the generated topic name. If topic_name exceeds 250 characters, only the first 250 characters are kept and the excess is truncated. |
logset_name | Logset Name | string | y | - | - |
index_options | all_index: enable key-value and full-text indexing; no_index: disable indexing; content_index: enable full-text indexing; key_index: enable key-value indexing | string | n | all_index | If storage_type=cold (Infrequent Storage), all_index and key_index do not take effect, because Infrequent Storage does not support key-value indexing. |
period | Storage period in days. The range is generally 1 to 3600 days; 3640 means permanent storage. | number | n | 3 | 1 to 3600 days |
storage_type | Storage type of the log topic. hot: Standard Storage; cold: Infrequent Storage | string | n | hot | When set to cold, the minimum period is 7 days |
hot_period | 0: disable log settlement. Non-zero: number of days of Standard Storage after log settlement is enabled. hot_period must be greater than or equal to 7 and less than period, and it takes effect only when storage_type is hot | number | n | 0 | - |
tag_dynamic | Add dynamic tags to the log topic. Use with the extract_tag() function to extract tag KV from log fields. For example: tag_dynamic=extract_tag(v("pd"),v("env"),v("team"), v("person")) | string | n | - | No more than 10 pairs of tags with tag_static |
tag_static | Add static tags to the log topic. For example: tag_static="Ckafka:test_env,developer_team:MikeWang" | string | n | - | No more than 10 pairs of tags with tag_dynamic |
[{"pd": "CLB","dateTime": "2023-05-25T00:00:26.579"},{"pd": "Ckafka","time": "2023-05-25T18:00:55.350+08:00"},{"pd": "COS","time": "2023-05-25T00:06:20.314+08:00"},{"pd": "CDN","time": "2023-05-25T00:03:52.051+08:00"}]
log_auto_output(v("pd"),"My Log Set",index_options="content_index", period=3)
log_split(Field name, sep=",", quote="\"", jmes="", output="")
Parameter | Description | Type | Required | Default Value | Value Range |
field | Field to extract | string | Yes | - | - |
sep | Separator | string | No | , | Any single character |
quote | Characters that enclose the value | string | No | - | - |
jmes | JMES extraction expression | string | No | - | - |
output | Name of a single field | string | Yes | - | - |
The field has multiple values.
Raw log:
{"field": "hello Go,hello Java,hello python","status":"500"}
// Use the separator "," to split the log into 3 logs.
log_split("field", sep=",", output="new_field")
{"new_field":"hello Go","status":"500"}
{"new_field":"hello Java","status":"500"}
{"new_field":"hello python","status":"500"}
{"field": "{\"a\":{\"b\":{\"c\":{\"d\":\"a,b,c\"}}}}", "status": "500"}
// The value of `a.b.c.d` is `a,b,c`.
log_split("field", jmes="a.b.c.d", output="new_field")
{"new_field":"a","status":"500"}
{"new_field":"b","status":"500"}
{"new_field":"c","status":"500"}
{"field": "{\"a\":{\"b\":{\"c\":{\"d\":[\"a\",\"b\",\"c\"]}}}}", "status": "500"}
log_split("field", jmes="a.b.c.d", output="new_field")
{"new_field":"a","status":"500"}{"new_field":"b","status":"500"}{"new_field":"c","status":"500"}
log_drop(Condition 1)
Parameter | Description | Type | Required | Default Value | Value Range |
condition | Function expression whose return value is of bool type | bool | Yes | - | - |
Delete logs whose status is 200 and retain other logs.
Raw logs:
{"field": "a,b,c", "status": "500"}
{"field": "a,b,c", "status": "200"}
log_drop(op_eq(v("status"), 200))
{"field":"a,b,c","status":"500"}
log_keep(Condition 1)
Parameter | Description | Type | Required | Default Value | Value Range |
condition | Function expression whose return value is of bool type | bool | Yes | - | - |
Retain logs whose status is 500 and delete other logs.
Raw logs:
{"field": "a,b,c", "status": "500"}
{"field": "a,b,c", "status": "200"}
log_keep(op_eq(v("status"), 500))
{"field":"a,b,c","status":"500"}
log_split_jsonarray_jmes("field", jmes="items", prefix="")
Parameter | Description | Type | Required | Default Value | Value Range |
field | Field to extract | string | Yes | - | - |
{"common":"common","result":"{\"target\":[{\"a\":\"a\"},{\"b\":\"b\"}]}"}
log_split_jsonarray_jmes("result",jmes="target")fields_drop("result")
{"common":"common", "a":"a"}{"common":"common", "b":"b"}
{"common":"common","target":"[{\"a\":\"a\"},{\"b\":\"b\"}]"}
log_split_jsonarray_jmes("target",prefix="prefix_")fields_drop("target")
{"prefix_a":"a", "common":"common"}{"prefix_b":"b", "common":"common"}
Last updated:2025-12-05 11:41:07

v(Field name)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
field | Field name | string | Yes | - | - |
{"message": "failed", "status": "500"}
fields_set("new_message", v("message"))
{"message": "failed", "new_message": "failed","status": "500"}
fields_drop(Field name 1, Field name 2, ..., regex=False,nest=False)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
Variable parameter | Field name or a regular expression of the field name | string | Yes | - | - |
regex | Whether to enable regular expression and use the full match mode | bool | No | False | - |
nest | Whether the field is a nested field | bool | No | False | - |
{"field": "a,b,c", "status": "500"}
fields_drop("field")
{"status":"500"}
{"condition":"{\"a\":\"aaa\", \"c\":\"ccc\", \"e\":\"eee\"}","status":"500"}
// `nest=True` indicates that the field is a nested field. After `condition.a` and `condition.c` are deleted, only the `condition.e` field is left.
t_if(if_json(v("condition")), fields_drop("condition.a", "condition.c", nest=True))
{"condition":"{\"e\":\"eee\"}","status":"500"}
{"App": "thcomm","Message": "{\"f_httpstatus\": \"200\",\"f_requestId\": \"2021-11-09 08:40:17.832\tINFO\tservices/http_service.go:361\tbb20ac02-fcbc-4a56-b1f1-4064853b79da\",\"f_url\": \"wechat.wecity.qq.com/trpcapi/MbpsPaymentServer/scanCode\"}"}
// `nest=True` indicates that the field is a nested field. After `Message.f_requestId` and `Message.f_url` are deleted, only the `f_httpstatus` field is left.
t_if(if_json(v("Message")), fields_drop("Message.f_requestId", "Message.f_url", nest=True))
{"App":"thcomm","Message":"{\"f_httpstatus\":\"200\"}"}
fields_keep(Field name 1, Field name 2, ..., regex=False)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
Variable parameter | Field name or a regular expression of the field name | string | Yes | - | - |
regex | Whether to enable regular expression and use the full match mode | bool | No | False | - |
{"field": "a,b,c", "status": "500"}
fields_keep("field")
{"field":"a,b,c"}
fields_pack(Target field name, include=".*", exclude="", drop_packed=False)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
output | Name of the new field after encapsulation | string | Yes | - | - |
include | Regular expression to include the field name | string | No | - | - |
exclude | Regular expression to exclude the field name | string | No | - | - |
drop_packed | Whether to delete the original fields that are encapsulated | bool | No | False | - |
{"field_a": "a,b,c","field_b": "abc", "status": "500"}
fields_pack("new_field","field.*", drop_packed=False)
{"new_field":"{\"field_a\":\"a,b,c\",\"field_b\":\"abc\"}","field_a":"a,b,c","field_b":"abc","status":"500"}
fields_set(Field name 1, Field value 1, Field name 2, Field value 2, mode="overwrite")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
Variable parameter | List of key-value pairs | string | - | - | - |
mode | Field overwrite mode | string | No | overwrite | - |
{"Level": "Info"}
fields_set("Level", "Warning")
{"Level", "Warning"}
Add two new fields, new and new2.
Raw log:
{"a": "1", "b": "2", "c": "3"}
fields_set("new", v("b"), "new2", v("c"))
{"a":"1","b":"2","c":"3","new":"2","new2":"3"}
fields_rename(Field name 1, New field name 1, Field name 2, New field name 2, regex=False)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
Variable parameter | List of original-new field name pairs | string | - | - | - |
regex | Whether to enable regular expression match for field names. If yes, use a regular expression to match the original field name. If no, use equal match. | bool | No | False | - |
{"regieeen": "bj", "status": "500"}
fields_rename("reg.*", "region", regex=True)
{"region":"bj","status":"500"}
If the specified field exists, the function returns True. Otherwise, the function returns False.
has_field(Field name)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
field | Field name | string | Yes | - | - |
{"regiooon": "bj", "status": "500"}
t_if(has_field("regiooon"), fields_rename("regiooon", "region"))
{"region":"bj","status":"500"}
If the specified field does not exist, the function returns True. Otherwise, the function returns False.
not_has_field(Field name)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
field | Field name | string | Yes | - | - |
{"status": "500"}
t_if(not_has_field("message"), fields_set("no_message", True))
{"no_message":"TRUE","status":"500"}
log_auto_output(v("pd"),"My Log Set",index_options="content_index", period=3,tag_static="Ckafka:test_env,developer:MikeWang",tag_dynamic=extract_tag("pd",v("pd"),"team", v("team")))
extract_tag(tag name 1, tag value 1, tag name 2, tag value 2, ...)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
Tag Name | Tag Name | string | Yes | - | - |
Tag Value | Tag Value | string | Yes | - | - |
[{"pd": "CDN", "team": "test"},{"pd": "CLS", "team": "product"},{"pd": "COS", "team": "sales"},{"pd": "CLS", "team": "test"},{"pd": "CKafka", "team": "product"}]
log_auto_output(v("pd"),"My Log Set",index_options="content_index", period=3,tag_static="Ckafka:test_env",tag_dynamic=extract_tag("pd",v("pd"),"team", v("team")))
Last updated:2025-12-03 18:30:47
json_add(key,data)
Parameter Name | Parameter Description | Parameter Type | Required | Parameter Default Value | Parameter Value Range |
key | json key | string | Yes | - | - |
data | Add a new node | dict | Yes | - | - |
{"content": "{\"a\":{\"b\":{\"c\":\"cc\"}}}"}
json_add("content", {"a":{"b":{"d":"dd"}}})
{"content": "{\"a\":{\"b\":{\"c\":\"cc\",\"d\":\"dd\"}}}"}
json_edit("result", path="", key="",value="333",index=1,mode="edit")
Parameter | Description | Type | Required | Default Value | Value Range |
field | Nested json corresponding key | string | Yes | - | - |
path | Path of the target field to delete or modify. Leave it blank when the field is at the first level of the nested JSON; when operating on array elements, specify the path only up to the key of the array. JMES syntax is supported. | string | No | - | - |
key | Target field to delete or modify. Not required when operating on array elements. | string | No | - | - |
value | New value to be set, required when modifying a value | string | No | - | - |
index | Fill in this field when operating on an array. Array elements start from 1. | number | No | 0 | - |
mode | Operation mode. edit: modify; move: delete. Default: move | string | No | move | - |
{"content":"{\"p9\":[\"0.0\",\"0.0\"],\"p18\":\"CN\"}","time":"1650440364"}
json_edit("content", path="", key="p18", value="hello", mode="edit")
{"content":"{\"p9\":[\"0.0\",\"0.0\"],\"p18\":\"hello\"}", "time":"1650440364"}
{"content":"{\"p9\":[\"0.0\",\"0.0\"],\"p18\":\"CN\",\"info\":{\"province\":\"hubei\",\"geo\":{\"long\":\"111\",\"lati\":\"222\"}}}","time":"1650440364"}
json_edit("content", path="info.geo", key="long", value="333", mode="edit")
{"content":"{\"p9\":[\"0.0\",\"0.0\"],\"p18\":\"CN\",\"info\":{\"province\":\"hubei\",\"geo\":{\"long\":\"333\",\"lati\":\"222\"}}}","time":"1650440364"}
{"content":"{\"p9\":[\"0.0\",\"0.0\"],\"p18\":\"CN\",\"info\":{\"province\":\"hubei\",\"geo\":{\"long\":\"111\",\"lati\":\"222\"}}}","time":"1650440364"}
json_edit("content", path="", key="p18", mode="move")
{"content":"{\"p9\":[\"0.0\",\"0.0\"],\"info\":{\"province\":\"hubei\",\"geo\":{\"long\":\"111\",\"lati\":\"222\"}}}","time":"1650440364"}
{"content":"{\"p9\":[\"0.0\",\"0.0\"],\"p18\":\"CN\",\"info\":{\"province\":[\"hello\",\"world\"],\"geo\":{\"long\":[\"1.0\",\"2.0\"],\"lati\":\"222\"}}}","time":"1650440364"}
json_edit("content", path="info.province", index=1, mode="move")
{"content":"{\"p9\":[\"0.0\",\"0.0\"],\"p18\":\"CN\",\"info\":{\"province\":[\"world\"],\"geo\":{\"long\":[\"1.0\",\"2.0\"],\"lati\":\"222\"}}}","time":"1650440364"}
json_select(data, jmes="")
Parameter | Description | Type | Required | Default Value | Value Range |
data | Field value, which can be extracted by other functions. | string | Yes | - | - |
jmes | JMES extraction expression | string | Yes | - | - |
{"field": "{\"a\":{\"b\":{\"c\":{\"d\":\"success\"}}}}", "status": "500"}
fields_set("message", json_select(v("field"), jmes="a.b.c.d"))
{"field":"{\"a\":{\"b\":{\"c\":{\"d\":\"success\"}}}}","message":"success","status":"500"}
xml_to_json(data)
Parameter | Description | Type | Required | Default Value | Value Range |
data | Field value. | string | Yes | - | - |
{"xml_field": "<note><to>B</to><from>A</from><heading>Reminder</heading><body>Don't forget me this weekend!</body></note>", "status": "500"}
fields_set("json_field", xml_to_json(v("xml_field")))
{"xml_field":"<note><to>B</to><from>A</from><heading>Reminder</heading><body>Don't forget me this weekend!</body></note>","json_field":"{\"to\":\"B\",\"from\":\"A\",\"heading\":\"Reminder\",\"body\":\"Don't forget me this weekend!\"}","status":"500"}
json_to_xml(data)
Parameter | Description | Type | Required | Default Value | Value Range |
data | Field value. | string | Yes | - | - |
{"json_field":"{\"to\":\"B\",\"from\":\"A\",\"heading\":\"Reminder\",\"body\":\"Don't forget me this weekend!\"}", "status": "200"}
fields_set("xml_field", json_to_xml(v("json_field")))
{"json_field":"{\"to\":\"B\",\"from\":\"A\",\"heading\":\"Reminder\",\"body\":\"Don't forget me this weekend!\"}","xml_field":"<ObjectNode><to>B</to><from>A</from><heading>Reminder</heading><body>Don't forget me this weekend!</body></ObjectNode>","status":"200"}
if_json(data)
Parameter | Description | Type | Required | Default Value | Value Range |
data | Field value. | string | Yes | - | - |
{"condition":"{\"a\":\"b\"}","status":"500"}
t_if(if_json(v("condition")), fields_set("new", 1))
{"new":"1","condition":"{\"a\":\"b\"}","status":"500"}
{"condition":"haha","status":"500"}
t_if(if_json(v("condition")), fields_set("new", 1))
{"condition":"haha","status":"500"}
array_get(array, index_position)
Parameter | Description | Type | Required | Default Value | Value Range |
array | Array value | string | Yes | - | - |
index_position | Index of the element to get. Indexes start from 0. | int | Yes | - | - |
{"field1": "[1,2,3]"}
fields_set("field2", array_get(v("field1"), 0))
{"field1":"[1,2,3]","field2":"1"}
{"field1": "['tom','jerry','bobo']"}
fields_set("field2", array_get(v("field1"), 0))
{"field1":"['tom','jerry','bobo']","field2":"tom"}
Last updated:2024-01-20 17:44:35

Purpose | Raw Log | Regular Expression | Extraction Result |
Extract content in braces. | [2021-11-24 11:11:08,232][328495eb-b562-478f-9d5d-3bf7e][INFO] curl -H 'Host: ' http://abc.com:8080/pc/api -d '{"version": "1.0", "user": "CGW", "password": "123", "timestamp": 1637723468, "interface": {"Name": "ListDetail", "para": {"owner": "1253", "limit": [10, 14], "orderField": "createTime"}}} | \{[^\}]+\} | {"version": "1.0", "user": "CGW", "password": "123", "timestamp": 1637723468, "interface": {"Name": "ListDetail", "para": {"owner": "1253", "limit": [10, 10], "orderField": "createTime"} |
Extract content in brackets. | [2021-11-24 11:11:08,232][328495eb-b562-478f-9d5d-3bf7e][INFO] curl -H 'Host: ' http://abc.com:8080/pc/api -d '{"version": "1.0", "user": "CGW", "password": "123", "timestamp": 1637723468, "interface": {"Name": "ListDetail", "para": {"owner": "1253", "limit": [10, 14], "orderField": "createTime"}}} | \[\S+\] | [328495eb-b562-478f-9d5d-3bf7e] [INFO] |
Extract time. | [2021-11-24 11:11:08,232][328495eb-b562-478f-9d5d-3bf7e][INFO] curl -H 'Host: ' http://abc.com:8080/pc/api -d '{"version": "1.0", "user": "CGW", "password": "123", "timestamp": 1637723468, "interface": {"Name": "ListDetail", "para": {"owner": "1253", "limit": [10, 14], "orderField": "createTime"}}} | \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} | 2021-11-24 11:11:08,232 |
Extract uppercase characters of a specific length. | [2021-11-24 11:11:08,232][328495eb-b562-478f-9d5d-3bf7e][INFO] curl -H 'Host: ' http://abc.com:8080/pc/api -d '{"version": "1.0", "user": "CGW", "password": "123", "timestamp": 1637723468, "interface": {"Name": "ListDetail", "para": {"owner": "1253", "limit": [10, 14], "orderField": "createTime"}}} | [A-Z]{4} | INFO |
Extract lowercase characters of a specific length. | [2021-11-24 11:11:08,232][328495eb-b562-478f-9d5d-3bf7e][INFO] curl -H 'Host: ' http://abc.com:8080/pc/api -d '{"version": "1.0", "user": "CGW", "password": "123", "timestamp": 1637723468, "interface": {"Name": "ListDetail", "para": {"owner": "1253", "limit": [10, 15], "orderField": "createTime"}}} | [a-z]{6} | versio passwo timest interf create |
Extract letters and digits. | [2021-11-24 11:11:08,232][328495eb-b562-478f-9d5d-3bf7e][INFO] curl -H 'Host: ' http://abc.com:8080/pc/api -d '{"version": "1.0", "user": "CGW", "password": "123", "timestamp": 1637723468, "interface": {"Name": "ListDetail", "para": {"owner": "1253", "limit": [10, 14], "orderField": "createTime"}}} | ([a-z]{3}):([0-9]{4}) | com:8080 |
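For instance, the time regular expression from the table above can be combined with the regex_select function documented later in this section to write the extracted value into its own field; a minimal sketch, assuming the raw log text is stored in a content field:
// Extract the first match of the time pattern from `content` into a new `logtime` field.
fields_set("logtime", regex_select(v("content"), regex="\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}", index=0, group=0))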
regex_match(Field value, regex="", full=True)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Field value | string | Yes | - | - |
regex | Regular expression | string | Yes | - | - |
full | Whether to enable full match. For full match, the entire value must fully match the regular expression. For partial match, only part of the value needs to match the regular expression. | bool | No | True | - |
Check whether the regular expression fully matches the value 192.168.0.1 of the field IP (full=True). The regex_match function returns True in the case of a full match.
Raw log:{"IP":"192.168.0.1", "status": "500"}
// Check whether the regular expression "192\.168.*" fully matches the value `192.168.0.1` of the field `IP` and save the result to the new field `matched`.
t_if(regex_match(v("IP"), regex="192\.168.*", full=True), fields_set("matched", True))
{"IP":"192.168.0.1","matched":"TRUE","status":"500"}
Check whether the regular expression partially matches the value 192.168.0.1 of the field IP (full=False). The regex_match function returns True in the case of a partial match.
Raw log:{"IP":"192.168.0.1", "status": "500"}
t_if(regex_match(v("ip"), regex="192", full=False), fields_set("matched", True))
{"IP":"192.168.0.1","matched":"TRUE","status":"500"}
regex_select(Field value, regex="", index=1, group=1)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Field value | string | Yes | - | - |
regex | Regular expression | string | Yes | - | - |
index | Sequence number of the matched expression in the match result | number | No | First | - |
group | Sequence number of the matched group in the match result | number | No | First | - |
{"data":"hello123,world456", "status": "500"}
fields_set("match_result", regex_select(v("data"), regex="[a-z]+(\d+)",index=0, group=0))fields_set("match_result1", regex_select(v("data"), regex="[a-z]+(\d+)", index=1, group=0))fields_set("match_result2", regex_select(v("data"), regex="([a-z]+)(\d+)",index=0, group=0))fields_set("match_result3", regex_select(v("data"), regex="([a-z]+)(\d+)",index=0, group=1))
{"match_result2":"hello123","match_result1":"world456","data":"hello123,world456","match_result3":"hello","match_result":"hello123","status":"500"}
regex_split(Field value, regex="", limit=100)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Field value | string | Yes | - | - |
regex | Regular expression | string | Yes | - | - |
limit | Maximum array length after splitting. When this limit is reached, the remaining content is not split further and is added to the array as the final element. | number | No | 100 | - |
{"data":"hello123world456", "status": "500"}
fields_set("split_result", regex_split(v("data"), regex="\d+"))
{"data":"hello123world456","split_result":"[\"hello\",\"world\"]","status":"500"}
regex_replace(Field value, regex="", replace="", count=0)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Field value | string | Yes | - | - |
regex | Regular expression | string | Yes | - | - |
replace | Target string, which is used to replace the matched result | string | Yes | - | - |
count | Replacement count. The default value is 0, indicating complete replacement. | number | No | 0 | - |
{"data":"hello123world456", "status": "500"}
fields_set("replace_result", regex_replace(v("data"), regex="\d+", replace="", count=0))
{"replace_result":"helloworld","data":"hello123world456","status":"500"}
{"Id": "dev@12345","Ip": "11.111.137.225","phonenumber": "13912345678"}
// Mask the `Id` field. The first replacement gives `dev@***45`; the second masks the first two characters, giving `**v@***45`.
fields_set("Id",regex_replace(v("Id"),regex="\d{3}", replace="***",count=0))
fields_set("Id",regex_replace(v("Id"),regex="\S{2}", replace="**",count=1))
// Mask the `phonenumber` field by replacing the middle 4 digits with ****. The result is `139****5678`.
fields_set("phonenumber",regex_replace(v("phonenumber"),regex="(\d{0,3})\d{4}(\d{4})", replace="$1****$2"))
// Mask the second octet of the `Ip` field with ***. The result is `11.***.137.225`.
fields_set("Ip",regex_replace(v("Ip"),regex="(\d+\.)\d+(\.\d+\.\d+)", replace="$1***$2",count=0))
{"Id":"**v@***45","Ip":"11.***.137.225","phonenumber":"139****5678"}
regex_findall(Field value, regex="")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Field value | string | Yes | - | - |
regex | Regular expression | string | Yes | - | - |
{"data":"hello123world456", "status": "500"}
fields_set("result", regex_findall(v("data"), regex="\d+"))
{"result":"[\"123\",\"456\"]","data":"hello123world456","status":"500"}
Last updated:2025-04-29 17:22:21
dt_str(Value, format="Formatted string", zone="")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Date/time value of string type | string | Yes | - | - |
format | Target time format, for example, yyyy-MM-dd HH:mm:ss | string | No | - | - |
zone | Time zone of the output. UTC is used by default. For time zone definitions, see ZoneId. | string | No | UTC+00:00 | - |
{"date":"2014-04-26 13:13:44 +09:00"}
fields_set("result", dt_str(v("date"), format="yyyy-MM-dd HH:mm:ss", zone="UTC+8"))
{"date":"2014-04-26 13:13:44 +09:00","result":"2014-04-26 12:13:44"}
dt_to_timestamp(Value, zone="")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Date/time value of string type | string | Yes | - | - |
zone | UTC time is used by default, without a time zone specified. If you specify a time zone, make sure that it corresponds to the time field value. Otherwise, a time zone error occurs. For time zone definitions, see ZoneId. | string | No | UTC+00:00 | - |
{"date":"2021-10-26 15:48:15"}
fields_set("result", dt_to_timestamp(v("date"), zone="UTC+8"))
{"date":"2021-10-26 15:48:15","result":"1635234495000"}
dt_from_timestamp(Value, zone="")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Timestamp value of string type | string | Yes | - | - |
zone | Time zone of the output. UTC is used by default. For time zone definitions, see ZoneId. | string | No | UTC+00:00 | - |
{"date":"1635234495000"}
fields_set("result", dt_from_timestamp(v("date"), zone="UTC+8"))
{"date":"1635234495000","result":"2021-10-26 15:48:15"}
dt_now(format="Formatted string", zone="")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
format | Target time format, for example, yyyy-MM-dd HH:mm:ss | string | No | - | - |
zone | Time zone of the output. UTC is used by default. For time zone definitions, see ZoneId. | string | No | UTC+00:00 | - |
{"date":"1635234495000"}
fields_set("now", dt_now(format="yyyy-MM-dd HH:mm:ss", zone="UTC+8"))
{"date":"1635234495000","now":"2021-MM-dd HH:mm:ss"}
custom_cls_log_time(time)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
time | UTC timestamp. Seconds, milliseconds, microseconds, and nanoseconds are supported, for example, 1565064739000. For time zone definitions, see ZoneId. | string | Yes | - | - |
{"field1": "1","time":"06/Aug/2019 12:12:19"}
custom_cls_log_time(dt_to_timestamp(v("time"), zone="UTC+8"))
{"__TIMESTAMP__":"1565064739000", "field1":"1", "time":"06/Aug/2019 12:12:19"}
Last updated:2025-07-16 16:09:59
str_exist(data1, data2, ignore_upper=False)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data1 | Value of string type | string | Yes | - | - |
data2 | Value of string type | string | Yes | - | - |
ignore_upper | Whether to ignore case | bool | No | False | - |
{"data": "cls nihao"}
fields_set("result", str_exist(v(data), "nihao"))
{"result":"true","data":"cls nihao"}
str_count(Value, sub="", start=0, end=-1)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
sub | Substring whose number of occurrences you want to count | string | Yes | - | - |
start | Start position to search | number | No | 0 | - |
end | End position to search | number | No | -1 | - |
{"data": "warn,error,error"}
fields_set("result", str_count(v("data"), sub="err"))
{"result":"2","data":"warn,error,error"}
str_len(Value)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
{"data": "warn,error,error"}
fields_set("result", str_len(v("data")))
{"result":"16","data":"warn,error,error"}
str_uppercase(Value)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
{"data": "warn,error,error"}
fields_set("result", str_uppercase(v("data")))
{"result":"WARN,ERROR,ERROR","data":"warn,error,error"}
str_lowercase(Value)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
fields_set("result", str_lowercase(v("data")))
{"data": "WARN,ERROR,ERROR"}
{"result":"warn,error,error","data":"WARN,ERROR,ERROR"}
str_join(Concatenation string 1, Value 1, Value 2, ...)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
join | Concatenation string (separator) used to join the values | string | Yes | - | - |
Value parameter, list of variable parameters | Value of string type | string | Yes | - | - |
{"data": "WARN,ERROR,ERROR"}
fields_set("result", str_join(",", v("data"), "INFO"))
{"result":"WARN,ERROR,ERROR,INFO","data":"WARN,ERROR,ERROR"}
str_replace(Value, old="", new="", count=0)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
old | String to be replaced | string | Yes | - | - |
new | Target string after replacement | string | Yes | - | - |
count | Maximum replacement count. The default value is 0, replacing all matched content. | number | No | 0 | - |
data field with "ERROR".{"data": "WARN,ERROR,ERROR"}
fields_set("result", str_replace( v("data"), old="WARN", new="ERROR"))
result.
Processing result:{"result":"ERROR,ERROR,ERROR","data":"WARN,ERROR,ERROR"}
str_format(Formatted string, Value 1, Value 2, ...)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
format | Target format, using "{}" as placeholders. The numbers in "{}" correspond to the sequence numbers of the parameter values, and the numbers start from 0. For usage details, see MessageFormat.format. | string | Yes | - | - |
Value parameter, list of variable parameters | Value of string type | string | Yes | - | - |
{"status": 200, "message":"OK"}
fields_set("result", str_format("status:{0}, message:{1}", v("status"), v("message")))
{"result":"status:200, message:OK","message":"OK","status":"200"}
str_strip(Value, chars="\t\r\n")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
chars | String to delete | string | No | \t\r\n | - |
{"data": " abc "}
fields_set("result", str_strip(v("data"), chars=" "))
{"result":"abc","data":" abc "}
{"data": " **abc** "}
fields_set("result", str_strip(v("data"), chars=" *"))
{"result":"abc","data":" **abc** "}
str_lstrip(Value, chars="\t\r\n")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
chars | String to delete | string | No | \t\r\n | - |
{"data": " abc "}
fields_set("result", str_lstrip(v("data"), chars=" "))
{"result":"abc ","data":" abc "}
str_rstrip(Value, chars="\t\r\n")
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
chars | String to delete | string | No | \t\r\n | - |
{"data": " abc "}
fields_set("result", str_rstrip(v("data"), chars=" "))
{"result":" abc","data":" abc "}
str_find(Value, sub="", start=0, end=-1)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
sub | Substring to search for | string | Yes | - | - |
start | Start position to search | number | No | 0 | - |
end | End position to search | number | No | -1 | - |
{"data": "warn,error,error"}
fields_set("result", str_find(v("data"), sub="err"))
{"result":"5","data":"warn,error,error"}
str_start_with(Value, sub="", start=0, end=-1)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
sub | Prefix string or character | string | Yes | - | - |
start | Start position to search | number | No | 0 | - |
end | End position to search | number | No | -1 | - |
{"data": "something"}
fields_set("result", str_start_with(v("data"), sub="some"))
{"result":"true","data":"something"}
{"data": "something"}
fields_set("result", str_start_with(v("data"), sub="*"))
{"result":"false","data":"something"}
str_end_with(Value, sub="", start=0, end=-1)
Parameter | Description | Parameter Type | Required | Default Value | Value Range |
data | Value of string type | string | Yes | - | - |
sub | Suffix string or character | string | Yes | - | - |
start | Start position to search | number | No | 0 | - |
end | End position to search | number | No | -1 | - |
{"data": "endwith something"}
fields_set("result", str_end_with(v("data"), sub="ing"))
{"result":"true","data":"endwith something"}
Last updated:2024-01-20 17:44:35
ct_int(Value 1, base=10)
Parameter | Description | Type | Required | Default Value | Value Range |
data | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
base | Base | number | No | 10 | [2-36] |
{"field1": "10"}
fields_set("result", ct_int(v("field1")))
{"result":"10","field1":"10"}
{"field1": "AB"}
fields_set("result", ct_int(v("field1"), 16))
{"result":"171","field1":"AB"}
ct_float(Value)
Parameter | Description | Type | Required | Default Value | Value Range |
data | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "123"}
fields_set("result", ct_float(v("field1")))
{"result":"123.0","field1":"123"}
ct_str(Value)
Parameter | Description | Type | Required | Default Value | Value Range |
data | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": 123}
fields_set("result", ct_str(v("field1")))
{"result":"123","field1":"123"}
ct_bool(Value)
Parameter | Description | Type | Required | Default Value | Value Range |
data | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{}
fields_set("result", ct_bool(0))
{"result":"false"}
{}
fields_set("result", ct_bool(1))
{"result":"true"}
{"field1": 1}
fields_set("result", ct_bool(v("field1")))
{"result":"true","field1":"1"}
Last updated:2025-12-05 11:51:10

op_if(Condition 1, Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
condition | Condition expression | bool | Yes | - | - |
data1 | If the condition is True, the value of this parameter is returned. | string | Yes | - | - |
data2 | If the condition is False, the value of this parameter is returned. | string | Yes | - | - |
{"data": "abc"}
fields_set("result", op_if(True, v("data"), "false"))
{"result":"abc","data":"abc"}
{"data": "abc"}
fields_set("result", op_if(False, v("data"), "123"))
{"result":"123","data":"abc"}
If all values are True, True is returned; otherwise, False is returned.
op_and(Value 1, Value 2, ...)
Parameter | Description | Type | Required | Default Value | Value Range |
Variable parameter list | Parameters or expressions that participate in the calculation | string | Yes | - | - |
{}
fields_set("result", op_and(True, False))
{"result":"false"}
{}
fields_set("result", op_and(1, 1))
{"result":"true"}
{"data":"false"}
fields_set("result", op_and(1, v("data")))
{"result":"false","data":"false"}
If all values are False, False is returned; otherwise, True is returned.
op_or(Value 1, Value 2, ...)
Parameter | Description | Type | Required | Default Value | Value Range |
Variable parameter list | Parameters or expressions that participate in the calculation | string | Yes | - | - |
{}
fields_set("result", op_or(True, False))
{"result":"true"}
op_not(Value)
Parameter | Description | Type | Required | Default Value | Value Range |
data | Value of any type | any | Yes | - | - |
{}
fields_set("result", op_not(True))
{"result":"false"}
{}
fields_set("result", op_not("True"))
{"result":"false"}
If Value 1 is equal to Value 2, True is returned.
op_eq(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
Check whether the Post and Get fields are equal and save the result to the new field result.
Raw log: {"Post": "10", "Get": "11"}
fields_set("result", op_eq(v("Post"), v("Get")))
Processing result: {"result":"false","Post":"10","Get":"11"}
Check whether the field1 and field2 fields are equal.
Raw log:{"field1": "1", "field2": "1"}
fields_set("result", op_eq(v("field1"), v("field2")))
{"result":"true","field1":"1","field2":"1"}
If Value 1 is greater than or equal to Value 2, True is returned.
op_ge(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "20", "field2": "9"}
fields_set("result", op_ge(v("field1"), v("field2")))
{"result":"true","field1":"20","field2":"9"}
{"field1": "2", "field2": "2"}
fields_set("result", op_ge(v("field1"), v("field2")))
{"result":"true","field1":"2","field2":"2"}
If Value 1 is greater than Value 2, True is returned.
op_gt(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "20", "field2": "9"}
fields_set("result", op_ge(v("field1"), v("field2")))
{"result":"true","field1":"20","field2":"9"}
If Value 1 is less than or equal to Value 2, True is returned.
op_le(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "2", "field2": "2"}
fields_set("result", op_le(v("field1"), v("field2")))
{"result":"true","field1":"2","field2":"2"}
If Value 1 is less than Value 2, True is returned.
op_lt(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "2", "field2": "3"}
fields_set("result", op_lt(v("field1"), v("field2")))
{"result":"true","field1":"2","field2":"3"}
op_add(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "1", "field2": "2"}
fields_set("result", op_add(v("field1"), v("field2")))
{"result":"3","field1":"1","field2":"2"}
op_sub(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "1", "field2": "2"}
fields_set("result", op_sub(v("field1"), v("field2")))
{"result":"-1","field1":"1","field2":"2"}
op_mul(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "1", "field2": "2"}
fields_set("result", op_mul(v("field1"), v("field2")))
{"result":"2","field1":"1","field2":"2"}
op_div(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "1", "field2": "2"}
fields_set("result", op_div(v("field1"), v("field2")))
{"result":"0","field1":"1","field2":"2"}
{"field1": "1.0", "field2": "2"}
fields_set("result", op_div(v("field1"), v("field2")))
{"result":"0.5","field1":"1.0","field2":"2"}
op_sum(Value 1, Value 2, ...)
Parameter | Description | Type | Required | Default Value | Value Range |
Variable parameter list | Numeric value or string that can be converted to a numeric value | string | Yes | - | - |
{"field1": "1.0", "field2": "10"}
fields_set("result", op_sum(v("field1"), v("field2")))
{"result":"11.0","field1":"1.0","field2":"10"}
op_mod(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "1", "field2": "2"}
fields_set("result", op_mod(v("field1"), v("field2")))
{"result":"1","field1":"1","field2":"2"}
{"field1": "1.0", "field2": "5"}
fields_set("result", op_mod(v("field1"), v("field2")))
{"result":"1.0","field1":"1.0","field2":"5"}
{"field1": "6", "field2": "4"}
fields_set("result", op_mod(v("field1"), v("field2")))
{"result":"2","field1":"6","field2":"4"}
Checks whether a value is null. If so, true is returned; otherwise, false is returned.
op_null(Value)
Parameter | Description | Type | Required | Default Value | Value Range |
data | Value of any type | any | Yes | - | - |
{}
fields_set("result", op_null("null"))
{"result":"true"}
{"data": null}
fields_set("result", op_null(v("data")))
{"data": "null", "result":"true"}
Checks whether a value is not null. If the value is not null, true is returned; otherwise, false is returned.
op_notnull(Value)
Parameter | Description | Type | Required | Default Value | Value Range |
data | Value of any type | any | Yes | - | - |
{}
fields_set("result", op_notnull("null"))
{"result":"false"}
{"data": null}
fields_set("result", op_notnull(v("data")))
{"data": "null", "result":"false"}
If the two string values are equal, true is returned. Value 2 can contain multiple candidate values separated by | (see the examples below).
op_str_eq(Value 1, Value 2, ignore_upper=False)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | String value | string | Yes | - | - |
data2 | String value | string | Yes | - | - |
ignore_upper | Whether to ignore case | bool | No | False | - |
{"field": "cls"}
fields_set("result", op_str_eq(v("field"), "cls"))
{"result":"true","field":"cls"}
{"field": "cls"}
fields_set("result", op_str_eq(v("field"), "etl|cls|data"))
{"result":"true","field":"cls"}
{"field": "CLS"}
fields_set("result", op_str_eq(v("field"), "cls", ignore_upper=True))
{"result":"true","field":"CLS"}
{"field": "CLS"}
fields_set("result", op_str_eq(v("field"), "etl|cls|data", ignore_upper=True))
{"result":"true","field":"CLS"}
random(Value 1, Value 2)
Parameter | Description | Type | Required | Default Value | Value Range |
data1 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
data2 | Numeric value or string that can be converted to a numeric value | number | Yes | - | - |
{"field1": "1"}
log_keep(op_eq(random(1, 5), 3))
{"field1": "1"}
{"field1": "1"}
fields_set("field2", random(1, 5))
{"field1":"1", "field2":"4"}
Last updated:2025-04-30 09:45:11
decode_url(value)
Parameter | Description | Type | Required | Default Value | Value Range |
url | URL value | string | Yes | - | - |
{"url":"https%3A%2F%2Fcloud.tencent.com%2F"}
fields_set("result",decode_url(v("url")))
{"result":"https://cloud.tencent.com/","url":"https%3A%2F%2Fcloud.tencent.com%2F"}
md5_encoding(value)
Parameter | Description | Type | Required | Default Value | Value Range |
Value | The data for which to calculate the MD5 checksum | String | Yes | - | - |
{"field": "haha"}
fields_set("field", md5_encoding(v("field")))
{"field":"4e4d6c332b6fe62a63afe56171fd3725"}
uuid()
{"key":"value"}
fields_set("field",uuid())
{"field":"8c2db704-45c0-4ea1-9e2c-cf9c966e35cd","key":"value"}
str_encode(data, encoding="utf8", errors="ignore")
Parameter | Description | Type | Required | Default Value | Value Range |
data | Data to be encoded | String | Yes | - | - |
encoding | Encoding format, utf8 by default, supporting ASCII, latin1, and unicode-escape. | String | No | utf8 | - |
errors | How to handle characters that cannot be encoded in the specified format: ignore (default): skip the unrecognized characters without encoding them. strict: report an error and discard this log entry. replace: replace the unrecognized characters with a half-width question mark (?). xmlcharrefreplace: replace the unrecognized characters with the corresponding XML character references. | String | No | ignore | - |
{"field1": "asd encode encode \\u1234"}
fields_set("field2", str_decode(str_encode(v("field1"), "unicode-escape"), "unicode-escape"))
{"field1":"asd encode encode \\u1234","field2":"asd code code ሴ"}
str_decode(data, encoding="utf8", errors="ignore")
Parameter | Description | Type | Required | Default Value | Value Range |
data | Data to be decoded | String | Yes | - | - |
encoding | Encoding format, utf8 by default, supporting ASCII, latin1, and unicode-escape. | String | No | utf8 | - |
errors | How to handle characters that cannot be decoded in the specified format: ignore (default): skip the undecodable characters without decoding them. strict: report an error and discard this log entry. replace: replace the undecodable characters with a half-width question mark (?). xmlcharrefreplace: replace the undecodable characters with the corresponding XML character references. | String | No | ignore | - |
{"field1": "Test in English and Chinese: qwertyuiopasdfghjklzxcvbnm QWERTYUIOPASDFGHJKLZXCVBNM special symbols:]] [! @#$%^&*()_++~"}
fields_set("field2", str_decode(str_encode(v("field1"))))
{"field1":"Test in English and Chinese: qwertyuiopasdfghjklzxcvbnm QWERTYUIOPASDFGHJKLZXCVBNM Special Symbols:]] [! @#$%^&*()_++~", "field2":"Test in English and Chinese: qwertyuiopasdfghjklzxcvbnm QWERTYUIOPASDFGHJKLZXCVBNM special symbols:]] [! @#$%^&*()_++~"}
base64_encode(value, format="RFC3548")
Parameter | Description | Type | Required | Default Value | Value Range |
value | String to be encoded | string | Yes | - | - |
format | Encoding format, supports RFC4648 (default), RFC3548 | string | No | RFC4648 | - |
{"field": "hello world"}
fields_set("encode", base64_encode(v("field")))
{"encode":"aGVsbG8gd29ybGQ=", "field":"hello world"}
base64_decode(value, format="RFC3548")
Parameter | Description | Type | Required | Default Value | Value Range |
value | The string to be decoded | string | Yes | - | - |
format | Decoding format, supports RFC4648 (default), RFC3548 | string | No | RFC4648 | - |
{"field": "aGVsbG8gd29ybGQ="}
fields_set("decode", base64_decode(v("field")))
{"decode":"hello world", "field":"aGVsbG8gd29ybGQ="}
Last updated:2024-01-20 17:44:35
geo_parse(field value, keep=("country","province","city"), ip_sep=",")
Parameter | Description | Type | Required | Default Value | Value Range |
data | IP value. Separate multiple IPs by separator. | string | Yes | - | - |
keep | Fields to retain in the result | string | No | ("country","province","city") | - |
ip_sep | Separator between multiple IPs in the field value | string | No | , | - |
{"ip":"101.132.57.150"}
fields_set("result", geo_parse(v("ip")))
{"ip":"101.132.57.150","result":"{\"country\":\"China\",\"province\":\"Shanghai\",\"city\":\"Shanghai\"}"}
{"ip":"101.132.57.150,101.14.57.157"}
fields_set("result", geo_parse(v("ip"),keep="province,city",ip_sep=","))
{"ip":"101.132.57.150,101.14.57.157", "result":"{\"101.14.57.157\":{\"province\":\"Taiwan\",\"city\":\"NULL\"},\"101.132.57.150\":{\"province\":\"Shanghai\",\"city\":\"Shanghai\"}}"}
is_subnet_of(IP range list, IP)
Parameter | Description | Type | Required | Default Value | Value Range |
IP range list | IP range. Separate multiple IP ranges by comma. | String | Yes | - | - |
IP | The IP to be checked | String | Yes | - | - |
{"ip": "192.168.1.127"}
log_keep(is_subnet_of("192.168.1.64/26",v("ip")))
{"ip": "192.168.1.127"}
{"ip": "192.168.1.127"}
fields_set("is_subnet",is_subnet_of("192.168.1.64/26",v("ip")))
{"ip": "192.168.1.127", "is_subnet":"true"}
{"ip": "192.168.1.127"}
fields_set("is_subnet",is_subnet_of("172.16.0.0/16",v("ip")))
{"ip": "192.168.1.127", "is_subnet":"false"}
{"ip": "192.168.1.127"}
fields_set("is_subnet",is_subnet_of("172.16.0.0/16,192.168.1.64/26",v("ip")))
{"ip": "192.168.1.127", "is_subnet":"true"}
Last updated:2025-12-03 18:30:47
res_local(param, default=None, type="auto")
Parameter Name | Parameter Description | Parameter Type | Required | Parameter Default Value | Parameter Value Range |
param | Field name corresponding to the environment variable in advanced configuration | string | Yes | - | - |
default | Value to return if the field value does not exist. Default value: None. | string | No | None | - |
type | Output data format. auto (default): convert the original value to JSON format; if the conversion fails, return the original value. JSON: convert the original value to JSON format; if the conversion fails, return the value of the default parameter. raw: return the original value as is. | string | No | auto | - |
{}
fields_set("time_session", res_local("time_session"))
{"time_session":"30"}
res_rds_mysql(alias, database="database name", sql="select name from person_info", refresh_interval=0, base_retry_back_off=1, max_retry_back_off=60, update_time_key=None, use_ssl=False)
Parameter Name | Parameter Description | Parameter Type | Required | Parameter Default Value | Parameter Value Range |
alias | Configured database information alias | string | Yes | - | - |
database | Database name. | string | Yes | - | - |
sql | SQL statement to retrieve data | string | Yes | - | - |
refresh_interval | Fetch interval (unit: second). Default value is 0, which means only fetch once. | number | No | 0 | - |
base_retry_back_off | Interval before retrying after a failed data pull. Default value: 1. Unit: second. | number | No | 1 | - |
max_retry_back_off | The maximum time interval to retry request after failed to pull data. Default value is 60, unit: second, recommended to use the default value. | number | No | 60 | - |
update_time_key | Used for incremental data retrieval. If this parameter is not configured, perform a full update. | string | No | - | - |
use_ssl | Whether to use SSL protocol for secure connection | bool | No | False | - |

[{"user_id": 1},{"user_id": 3}]
// On the console, configure the alias of external data MySQL as hm, set the db of mysql to test222, and the table name to test
// Pull all data from MySQL and use the t_table_map function to associate the dimension table
t_table_map(res_rds_mysql(alias="hm",database="test222",sql="select * from test"),"user_id",["gameid", "game"])
[{"user_id":"1"},{"game":"wangzhe","gameid":"123","user_id":"3"}]
id | game_id | game_name | region | game_details |
1 | 10001 | Honor of Kings | CN | MOBA |
2 | 10002 | League of Legends | NA | PC MOBA |
3 | 10003 | Genshin Impact | CN | RPG |
4 | 10004 | Black Myth: Wukong | CN | PC Game |
5 | 10005 | Diablo | NA | Role play |
[{"id": 1},{"id": 2},{"id": 3}]
// On the console, configure the alias of external data MySQL as hm, set the db of mysql to test222, and the table name to test
// select * from test where region='CN': pull data with region='CN' from MySQL and use the t_table_map function to associate the dimension table
t_table_map(res_rds_mysql(alias="hm",database="test222",sql="select * from test where region='CN'"),"id",["game", "game_details"])
[{"game_details":"MOBA mobile game""game_name":"Honor of Kings""id":"1"},{"id":"2"},{"game_details":"open world RPG game""game_name":"Genshin Impact""id":"3"}]
Last updated:2025-11-25 09:11:39
The fields_set function is used to set field values and store the content processed by other data processing functions. For example, fields_set("A+B",op_add(v("Field A"),v("Field B"))) adds the values of fields A and B; the op_add function relies on fields_set to write and store the result.
Last updated:2024-01-20 17:44:35
{"content": "[2021-11-24 11:11:08,232][328495eb-b562-478f-9d5d-3bf7e][INFO] curl -H 'Host: ' http://abc.com:8080/pc/api -d {\"version\": \"1.0\",\"user\": \"CGW\",\"password\": \"123\",\"interface\": {\"Name\": \"ListDetail\",\"para\": {\"owner\": \"1253\",\"orderField\": \"createTime\"}}}"}
fields_set("Action",regex_select(v("content"),regex="\{[^\}]+\}",index=0,group=0))fields_set("loglevel",regex_select(v("content"),regex="\[[A-Z]{4}\]",index=0,group=0)).fields_set("logtime",regex_select(v("content"),regex="\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}",index=0,group=0))fields_set("Url",regex_select(v("content"),regex="([a-z]{3}).([a-z]{3}):([0-9]{4})",index=0,group=0))fields_drop("content")
fields_set("Action",regex_select(v("content"),regex="\{[^\}]+\}",index=0,group=0))
fields_set("loglevel",regex_select(v("content"),regex="\[[A-Z]{4}\]",index=0,group=0)).
fields_set("logtime",regex_select(v("content"),regex="\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}",index=0,group=0))
fields_set("Url",regex_select(v("content"),regex="([a-z]{3}).([a-z]{3}):([0-9]{4})",index=0,group=0))
fields_drop("content")
{"Action":"{\"version\": \"1.0\",\"user\": \"CGW\",\"password\": \"123\",\"interface\": {\"Name\": \"ListDetail\",\"para\": {\"owner\": \"1253\",\"orderField\": \"createTime\"}","Url":"abc.com:8080","loglevel":"[INFO]","logtime":"2021-11-24 11:11:08,232"}
Last updated:2024-12-18 16:37:58
[{"content": {"App": "App-1","start_time": "2021-10-14T02:15:08.221","resonsebody": {"method": "GET","user": "Tom"},"response_code_details": "3000","bytes_sent": 69}},{"content": {"App": "App-2","start_time": "2222-10-14T02:15:08.221","resonsebody": {"method": "POST","user": "Jerry"},"response_code_details": "2222","bytes_sent": 1}}]
{"timestamp": 1732099684144000,"topic": "log-containers","records": [{"category": "kube-request","log": "{\"requestID\":\"12345\",\"stage\":\"Complete\"}"},{"category": "db-request","log": "{\"requestID\":\"67890\",\"stage\":\"Response\"}"}]}
[{"App":"App-1","user":"Tom"},{"App":"App-2","user":"Jerry"}]
[{"category":"kube-request","requestID":"12345","stage":"Complete","timestamp":"1732099684144000","topic":"log-containers"},{"category":"db-request","requestID":"67890","stage":"Response","timestamp":"1732099684144000","topic":"log-containers"}]
// Use the ext_json function to extract structured data from JSON data; by default, it flattens all fields
ext_json("content")
// Discard the content field
fields_drop("content")
// Discard the unnecessary fields bytes_sent, method, response_code_details, and start_time
fields_drop("bytes_sent","method","response_code_details","start_time")
// Split logs from the array, splitting into 2 logs
log_split_jsonarray_jmes("records")
// Discard the original field records
fields_drop("records")
// Expand the KV pairs of the log
ext_json("log")
// Discard the original field log
fields_drop("log")
Last updated:2024-01-20 17:44:35
[{"__CONTENT__": "2021-11-29 15:51:33.201 INFO request 7143a51d-caa4-4a6d-bbf3-771b4ac9e135 action: Describe uin: 15432829 reqbody {\"Key\": \"config\",\"Values\": \"appisrunnning\",\"Action\": \"Describe\",\"RequestId\": \"7143a51d-caa4-4a6d-bbf3-771b4ac9e135\",\"AppId\": 1302953499,\"Uin\": \"100015432829\"}"},{"__CONTENT__": "2021-11-2915: 51: 33.272 ERROR request 2ade9fc4-2db2-49d8-b3e0-a6ea78ce8d96 has error action DataETL uin 15432829"},{"__CONTENT__": "2021-11-2915: 51: 33.200 INFO request 6059b946-25b3-4164-ae93-9178c9e73d75 action: UploadData hUWZSs69yGc5HxgQ TaskId 51d-caa-a6d-bf3-7ac9e"}]
fields_set("requestid",regex_select(v("__CONTENT__"),regex="request [A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+",index=0,group=0))fields_set("action",regex_select(v("__CONTENT__"),regex="action: \S+|action \S+",index=0,group=0))t_if(regex_match(v("__CONTENT__"),regex="uin", full=False),fields_set("uin",regex_select(v("__CONTENT__"),regex="uin: \d+|uin \d+",index=0,group=0)))t_if(regex_match(v("__CONTENT__"),regex="TaskId", full=False),fields_set("TaskId",regex_select(v("__CONTENT__"),regex="TaskId [A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+",index=0,group=0)))t_if(regex_match(v("__CONTENT__"),regex="reqbody", full=False),fields_set("requestbody",regex_select(v("__CONTENT__"),regex="reqbody \{[^\}]+\}")))t_if(has_field("requestbody"),fields_set("requestbody",str_replace(v("requestbody"),old="reqbody",new="")))fields_drop("__CONTENT__")fields_set("requestid",str_replace(v("requestid"),old="request",new=""))t_if(has_field("action"),fields_set("action",str_replace(v("action"),old="action:|action",new="")))t_if(has_field("uin"),fields_set("uin",str_replace(v("uin"),old="uin:|uin",new="")))t_if(has_field("TaskId"),fields_set("TaskId",str_replace(v("TaskId"),old="TaskId",new="")))
fields_set("requestid",regex_select(v("__CONTENT__"),regex="request [A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+",index=0,group=0))
fields_set("action",regex_select(v("__CONTENT__"),regex="action: \S+|action \S+",index=0,group=0))
t_if(regex_match(v("__CONTENT__"),regex="uin", full=False),fields_set("uin",regex_select(v("__CONTENT__"),regex="uin: \d+|uin \d+",index=0,group=0)))
t_if(regex_match(v("__CONTENT__"),regex="TaskId", full=False),fields_set("TaskId",regex_select(v("__CONTENT__"),regex="TaskId [A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+-[A-Za-z0-9]+",index=0,group=0)))
t_if(regex_match(v("__CONTENT__"),regex="reqbody", full=False),fields_set("requestbody",regex_select(v("__CONTENT__"),regex="reqbody \{[^\}]+\}")))
fields_drop("__CONTENT__")
t_if(has_field("requestbody"),fields_set("requestbody",str_replace(v("requestbody"),old="reqbody",new="")))
fields_set("requestid",str_replace(v("requestid"),old="request",new=""))
t_if(has_field("action"),fields_set("action",str_replace(v("action"),old="action:|action",new="")))
t_if(has_field("uin"),fields_set("uin",str_replace(v("uin"),old="uin:|uin",new="")))
t_if(has_field("tTaskId"),fields_set("TaskId",str_replace(v("TaskId"),old="TaskId",new="")))
[{"action":" Describe","requestid":" 7143a51d-caa4-4a6d-bbf3-771b4ac9e135","requestbody":" {\"Key\": \"config\",\"Values\": \"appisrunnning\",\"Action\": \"Describe\",\"RequestId\": \"7143a51d-caa4-4a6d-bbf3-771b4ac9e135\",\"AppId\": 1302953499,\"Uin\": \"100015432829\"}","uin":" 15432829"},{"action":" DataETL","requestid":" 2ade9fc4-2db2-49d8-b3e0-a6ea78ce8d96","uin":" 15432829"},{"action":" UploadData","requestid":" 6059b946-25b3-4164-ae93-9178c9e73d75","TaskId":" 51d-caa-a6d-bf3-7ac9e"}]
Last updated:2024-01-20 17:44:35
{"regex": "2021-12-02 14:33:35.022 [1] INFO org.apache.Load - Response:status: 200, resp msg: OK, resp content: { \"TxnId\": 58322, \"Label\": \"flink_connector_20211202_1de749d8c80015a8\", \"Status\": \"Success\", \"Message\": \"OK\", \"TotalRows\": 1, \"LoadedRows\": 1, \"FilteredRows\": 0, \"CommitAndPublishTimeMs\": 16}"}
ext_sepstr("regex", "f1, f2, f3", sep=",")fields_drop("regex")fields_drop("f1")fields_drop("f2")ext_sepstr("f3", "f1,resp_content", sep=":")fields_drop("f1")fields_drop("f3")ext_json("resp_content", prefix="")fields_drop("resp_content")
ext_sepstr("regex", "f1, f2, f3", sep=",")
fields_drop("regex")fields_drop("f1")fields_drop("f2")
ext_sepstr("f3", "f1,resp_content", sep=":")
fields_drop("f1")fields_drop("f3")
ext_json("resp_content", prefix="")
fields_drop("resp_content")
{"CommitAndPublishTimeMs":"16","FilteredRows":"0","Label":"flink_connector_20211202_1de749d8c80015a8","LoadedRows":"1","Message":"OK","Status":"Success","TotalRows":"1","TxnId":"58322"}
Last updated:2024-12-18 16:36:30

{"__FILENAME__": "","__SOURCE__": "192.168.100.123","message": "2024-10-11 15:32:10.003 DEBUG [gateway,746db87efd1bbcf5434cb9835c59e522,47c3036810e0c33b] [scheduled-Thread-1] c.i.g.c.f.d.a.task.AppleHealthCheckTask"}
{"__FILENAME__":"","__SOURCE__":"192.168.100.123","__TIMESTAMP__":"1728631930003","level":"DEBUG","service":"gateway","spanid":"47c3036810e0c33b","time":"2024-10-11 15:32:10.003","traceid":"746db87efd1bbcf5434cb9835c59e522"}
// Use the grok function to extract time, log level, service, traceid, and spanid from the log
ext_grok("message",grok="%{TIMESTAMP_ISO8601:time} %{DATA:level} \[%{DATA:service},%{DATA:traceid},%{DATA:spanid}\]")
// Delete the message field
fields_drop("message")
// Use the custom_cls_log_time function to replace the CLS log time (__TIMESTAMP__) with the new time field
custom_cls_log_time(dt_to_timestamp(v("time"), zone="UTC+8"))
Last updated:2024-01-20 17:44:35
[{"message": "2021-12-09 11:34:28.279||team A is working||INFO||605c643e29e4||BIN--COMPILE||192.168.1.1"},{"message": "2021-12-09 11:35:28.279||team A is working ||WARNING||615c643e22e4||BIN--Java||192.168.1.1"},{"message": "2021-12-09 11:36:28.279||team A is working ||ERROR||635c643e22e4||BIN--Go||192.168.1.1"},{"message": "2021-12-09 11:37:28.279||team B is working||WARNING||665c643e22e4||BIN--Python||192.168.1.1"}]
log_drop(regex_match(v("message"),regex="team B is working",full=False))ext_sepstr("message","time,log,loglevel,taskId,ProcessName,ip",sep="\|\|")fields_drop("message")t_switch(regex_match(v("loglevel"),regex="INFO",full=True),log_output("info_log"),regex_match(v("loglevel"),regex="WARNING",full=True),log_output("warning_log"),regex_match(v("loglevel"),regex="ERROR",full=True),log_output("error_log"))
log_drop(regex_match(v("message"),regex="team B is working",full=False))
ext_sepstr("message","time,log,loglevel,taskId,ProcessName,ip",sep="\|\|")
fields_drop("message")
t_switch(regex_match(v("loglevel"),regex="INFO",full=True),log_output("info_log"),regex_match(v("loglevel"),regex="WARNING",full=True),log_output("warning_log"),regex_match(v("loglevel"),regex="ERROR",full=True),log_output("error_log"))
{"ProcessName":"BIN--COMPILE","ip":"192.168.1.1","log":"team A is working","loglevel":"INFO","taskId":"605c643e29e4","time":"2021-12-09 11:34:28.279"}
{"ProcessName":"BIN--COMPILE","ip":"192.168.1.1","log":"team A is working","loglevel":"INFO","taskId":"605c643e29e4","time":"2021-12-09 11:34:28.279"}
{"ProcessName":"BIN--Go","ip":"192.168.1.1","log":"team A is working ","loglevel":"ERROR","taskId":"635c643e22e4","time":"2021-12-09 11:36:28.279"}
Last updated:2024-12-03 19:08:26
{"Id": "dev@12345","Ip": "11.111.137.225","phonenumber": "13912345678"}
fields_set("Id",regex_replace(v("Id"),regex="\d{3}", replace="***",count=0))fields_set("Id",regex_replace(v("Id"),regex="\S{2}", replace="**",count=1))fields_set("phonenumber",regex_replace(v("phonenumber"),regex="(\d{0,3})\d{4}(\d{4})", replace="$1****$2"))fields_set("Ip",regex_replace(v("Ip"),regex="(\d+\.)\d+(\.\d+\.\d+)", replace="$1***$2",count=0))
fields_set("Id",regex_replace(v("Id"),regex="\d{3}", replace="***",count=0))
fields_set("Id",regex_replace(v("Id"),regex="\S{2}", replace="**",count=1))
fields_set("phonenumber",regex_replace(v("phonenumber"),regex="(\d{0,3})\d{4}(\d{4})", replace="$1****$2"))
fields_set("Ip",regex_replace(v("Ip"),regex="(\d+\.)\d+(\.\d+\.\d+)", replace="$1***$2",count=0))
{"Id":"**v@***45","Ip":"11.***.137.225","phonenumber":"139****5678"}
Last updated:2024-12-18 16:37:22
[{"path": "/eks/pod/running","clientIP": "1.139.21.123","method": "POST"},{"path": "/cmdb/login","clientIP": "1.139.21.123","method": "PUT"},{"path": "/cmdb/start","clientIP": "1.139.21.123","method": "GET"}]
// If path contains 'cmdb', keep the log and filter out the rest
log_keep(regex_match(v("path"),regex="cmdb",full=False))
// If method contains POST or PUT, keep the log
log_keep(regex_match(v("method"),regex="POST|PUT",full=False))
{"clientIP":"1.139.21.123","method":"PUT","path":"/cmdb/login"}
Last updated:2025-12-03 18:30:47
Scenario | Logstash Plugin | Logstash Syntax | Data Processing |
Rename field | mutate | mutate { rename => { "old_field_name" => "new_field_name" } } | fields_set("new_field_name", v("old_field_name")) fields_drop("old_field_name") |
Delete field | mutate | mutate { remove_field => ["password_hash"] } | fields_drop("password_hash") |
Update field value | mutate | mutate { update => { "status_code" => "Not Found" } } | fields_set("status_code", "Not Found") |
Extract key-value pairs - Grok | grok | grok { match => { "message" => "%{TIMESTAMP_ISO8601:time} %{LOGLEVEL:level}" } } | ext_grok("message", grok="%{TIMESTAMP_ISO8601:time} %{LOGLEVEL:level}") |
Extract key-value pairs - Separator | split | mutate { split => { "message" => "|" } add_field => { "time" => "%{[message][0]}" "level" => "%{[message][1]}" "taskId" => "%{[message][2]}" "ProcessName" => "%{[message][3]}" "ip" => "%{[message][4]}" } remove_field => ["message"] } | ext_sepstr("message", "time,level,taskId,ProcessName,ip", sep="\|") fields_drop("message") |
Extract key-value pairs - JSON | json | json { source => "message" target => "parsed_data" } | ext_json("message") |
Delete log | drop | if [status] == 404 { drop {} } //if status=404, delete the log | log_drop(op_eq(v("status"), 404)) |
Logical judgment | if else | if [log] //if the log field exists if "Cost" in [message] //when the message field contains "Cost" | t_if(has_field("log"), ...) t_if(str_exist(v("message"), "Cost"), ...) |
| or, and | if "Cost" in [message] or "cost" in [message] | t_if(str_exist(v("message"), "cost", ignore_upper=True), ...) |
Distribute logs to multiple sinks (targets) | output | if [container] == "scm-pfc" { elasticsearch { hosts => ["xx.xx.x.xxx:9200"] index => "p-k8s" } } else { elasticsearch { hosts => ["xx.xx.x.xx:9200"] index => "p-container" } } | t_switch(op_str_eq(v("container"), "scm-pfc"), log_output("p-k8s"), True, log_output("p-container")) //else branch |
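As a worked illustration of the mappings above, a Logstash pipeline that keeps only messages containing "ERROR" and then parses them with grok could be expressed with the data processing functions from this table; a minimal sketch (the message field name and the grok pattern are illustrative):
// Keep only logs whose message contains "ERROR" (case-sensitive).
log_keep(str_exist(v("message"), "ERROR"))
// Extract the timestamp and log level from the message.
ext_grok("message", grok="%{TIMESTAMP_ISO8601:time} %{LOGLEVEL:level}")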
Last updated:2024-01-20 17:44:35

Last updated:2024-01-20 17:44:35

Common SQL Time Window Expression (Suppose It's 12:06 Now) | SQL Time Window | Description |
@m-1h, @m | 11:06 - 12:06 | `@m` and `-1h` indicate to take the value down to the minute and subtract 1 hour, respectively. |
@h-1h,@h | 11:00 - 12:00 | `@h` and `-1h` indicate to take the value down to the hour and subtract 1 hour, respectively. |
@m-1h+20m,@h+25m | 11:26 - 12:25 | `@m`, `-1h`, `+20m`, `@h`, and `+25m` indicate to take the value down to the minute, subtract 1 hour, add 20 minutes, take the value down to the hour, and add 25 minutes, respectively. |

Last updated:2024-01-20 17:44:35
Last updated:2024-01-20 17:44:35
demo-scf1.txt, and import it to the source CLS service.Last updated:2024-01-20 17:44:35
Data Shipping Format | Description | Recommended Scenario |
CSV format | Log data is shipped to COS based on the specified separator, such as space, tab, comma, semicolon, or vertical bar. | It can be used for computing in Data Lake Compute and for shipping raw logs (logs collected in a single line, in multiple lines, or with separators). |
JSON format | Log data is shipped to COS in JSON format. | It is a common data format and can be selected as needed. |
Parquet format | Log data is shipped to COS in Parquet format. | Log data needs to be structured, and the data type can be converted (not applicable to data collected in a single line or in multiple lines). This format is mainly used for Hive batch processing. |
Last updated:2024-01-20 17:44:35
Check whether the CLS_QcsRole role exists. You can use the search box in the top-right corner of the role list to search for the role. Confirm that the role has the QcloudCOSAccessForCLSRole and QcloudCKAFKAAccessForCLSRole permissions and that its carrier is cls.cloud.tencent.com.
If there is no such role or permission, create one as instructed below: enter clsrole in the search box, select the QcloudCKAFKAAccessForCLSRole and QcloudCOSAccessForCLSRole policies in the search result, and click Next. Name the role CLS_QcsRole and click Complete.
Last updated:2025-12-03 18:30:48
Configuration Item | Description | Rule | Required |
Shipping Task Name | Name of the delivery task. | / | Required |
Time range | Start time: The start time of the log data you want to ship. The default provides the earliest time point in the log topic lifecycle. End time: The end time of the log data you want to ship. Future time cannot be selected. Not specified means continuous shipping of logs. Note: If your start time is historical time and end time is not specified, the task will continuously ship both historical logs and real-time logs. For example: If you choose to deliver data from 00:03 on January 1, 2023 to ∞, and submit the delivery task at 19:05 on February 13, 2023, then the historical logs will be from 00:03 on January 1, 2023 to 19:05 on February 13, 2023, and the real-time logs will be from 19:05 on February 13, 2023 to ∞. The two kinds of data will be delivered to COS simultaneously. You may view the shipping progress and required duration of historical data delivery in the delivery task list. After task submission, the delivery time range cannot be modified . | Select Time | Required |
File Size | The size of the raw log file to be delivered works in conjunction with the delivery interval time. Whichever condition is met first will trigger the rule to compress the file, and then deliver it to COS. For example, if you configure 256MB and 15 minutes, and the file size reaches 256MB in 5 minutes, then the file size condition will trigger the delivery task first. | 5 - 256, unit: MB. | Required |
Shipping Interval | Specify the interval to trigger a delivery. This works with the delivery file size. Whichever condition is met first will trigger the rule to compress the file and then deliver it to COS. For example, if you configure 256MB and 15 minutes, and the file size is only 200MB after 15 minutes, then the interval time condition will trigger the delivery task first. | 5 - 15 minutes | Required |
Configuration Item | Description | Rule | Required |
Target COS Bucket Ownership | Current Root Account Deliver CLS logs to the current root account's COS bucket. Other Root Account Deliver CLS logs to another root account's COS bucket. For example, to deliver CLS logs from account A to account B's COS bucket, account B must configure an access role in Cloud Access Management (CAM). After configuration, account A needs to enter the Role ARN and external ID in the CLS console to enable cross-account delivery. The steps to configure the role are as follows: 1. Create the role. Account B logs in to the CAM role management page. 1.1 Create an access policy, with a policy name such as cross_shipper. See the following for policy syntax: Note: The authorization in the example follows the minimum permission principle, with the resource configured as shipping to only the COS bucket test123-123456789 in the Guangzhou region. Please authorize according to the actual situation.
1.2 Create a new role, select Tencent Cloud account as the role carrier, choose other root account for the cloud account type, then input the ID of Account A, such as 100012345678, check enable verification and configure the external ID, for example Hello123. 1.3 Configure the access policy for the role, selecting the pre-configured access policy cross_shipper (example). 1.4 Save the role, for example: uinA_writeCLS_to_COS. 2. Configure the carrier for the role. In the CAM role list, find uinA_writeCLS_to_COS (example), click the role, select role carrier > management carrier > add product service, choose CLS, then click update. The role now has two carriers: account A and cls.cloud.tencent.com (the CLS log service). 3. Log in to CLS with account A and fill in the Role ARN and external ID. The following two items need to be provided by account B: Account B finds the role uinA_writeCLS_to_COS (example) in the CAM role list and clicks it to view the role's RoleArn, such as qcs::cam::uin/100001112345:roleName/uinA_writeCLS_to_COS. The external ID, such as Hello123, can be viewed in the role carrier. Note: Enter the Role ARN and external ID carefully and do not include extra spaces; otherwise, permission verification will fail. Cross-account delivery will generate read traffic fees for log topics under account A. | current root account other root account | Optional
COS Bucket | The destination bucket for shipping logs. For cross-account delivery, the user must manually fill in the target bucket's name. | List selection | Required |
File naming | Delivery time naming: default option, such as 202208251645_000_132612782.gz means delivery time_log topic partition_offset. Hive can load this file. Random number naming: the old naming method, Hive may not recognize it. Hive does not recognize files starting with an underscore. You can add a custom prefix in the COS path configuration, such as /%Y%m%d/%H/Yourname. | / | Required
File Compression | No compression/snappy/lzop/gzip | / | Required |
COS path | The path for storing logs in a COS storage bucket. By default, logs are saved in the format /year/month/day/hour/, for example /2022/7/31/14/. Path configuration supports strftime syntax, such as: The path generated by /%Y/%m/%d/ is /2022/7/31/. The path generated by /%Y%m%d/%H/ is /20220731/14/. | Do not start with / | Optional
Storage Class | The log storage types in a COS bucket: standard storage, infrequent storage, intelligent tiering storage, archive storage, deep archive storage. For details, see storage type overview. Note: Cross-account delivery does not support selection of COS storage type. | List selection | Required |
Format of Data to Ship | Application Scenarios |
Applicable to Tencent Cloud DLC data ingestion calculation Use CSV delivery to implement original log text (single-line, multi-line, logs collected by delimiter). | |
For common data formats, see your business scenarios for selection. | |
Log data must be structured data, supporting data type conversion (not single-line or multi-line collection), and is mostly used for Hive. |
Escape Option | Description |
Do not escape | Make no changes to your JSON structure and hierarchy, and keep the log format consistent with that on the collection side. Example: Original log text: {"a":"aa", "b":{"b1":"b1b1", "c1":"c1c1"}} Deliver to COS: {"a":"aa", "b":{"b1":"b1b1", "c1":"c1c1"}} Note: When the first-layer node in JSON contains a numeric value, it will automatically convert to int or float after delivery. Original Log Text: {"a":123, "b":"123", "c":"-123", "d":"123.45", "e":{"e1":123,"f1":"123"}} Deliver to COS: {"a":123,"b":123,"c":-123,"d":123.45,"e":{"e1":123,"f1":"123"}} |
Escape | Convert the value of the first-layer JSON node to String. If your node value is Struct, you need to convert it into String in advance during downstream storage or calculation. You can select this option. Example 1: Original log text: {"a":"aa", "b":{"b1":"b1b1", "c1":"c1c1"}} Deliver to COS: {"a":"aa","b":"{\"b1\":\"b1b1\", \"c1\":\"c1c1\"}"} Example 2: Original Log Text: {"a":123, "b":"123", "c":"-123", "d":"123.45", "e":{"e1":123,"f1":"123"}} Deliver to COS:{"a":"123","b":"123","c":"-123","d":"123.45","e":"{\"e1\":123,\"f1\":\"123\"}"} |
Configuration Item | Description | Rule | Required |
Key | Specify the key-value (key) field to be written in CSV file (the filled key must be the structured key name or reserved field of the log, otherwise it will be regarded as an invalid key) | Only letters, digits, and _-./@ are supported. | Required |
Separator | Separator between fields in the CSV file. | List selection | Required |
Escape Character | If a delimiter character appears in the normal field, it needs to be wrapped with an escape character to prevent misidentification when reading data. | List selection | Required |
Invalid Field Filling | If the configured key-value field (key) does not exist, it will be filled with invalid fields. | List selection | Required |
Key in First Line | Add a description of the field name to the first line of the CSV file, that is, write the key-value (key) into the first line of the CSV file. It is not written by default. | ON/OFF | Required |
Configuration Item | Description | Required |
Key name | Write key-value fields to the Header part of a Parquet file. If the key-value from logs automatically pulled by the system does not meet your requirements, you can add fields (up to 100). Field names only support letters, digits, and _-./@. If a certain line of log lacks a defined Key, there will be no such Key in the Parquet file Body corresponding to that log. This will not affect your big data computing frameworks such as Spark and Flink. | Required |
Data type | The field data type in Parquet file: String, Boolean, Int32, Int64, Float, Double | Required |
Assignment information returned upon resolution failure | The default value when data parsing fails. For example, if a field value (String type) fails to parse, you can specify it as an empty String "", NULL, or a custom String. The same applies to other data types: boolean, integer, and floating-point. | Required |
Last updated:2025-12-03 18:30:48
Log Collection Format | Support for Raw Log Shipping |
Full text in a single line | Yes. |
Full text in multiple lines | Yes. |
Separator (CSV) format | It depends. For more information, see CSV format. Only space, tab, comma, semicolon, and vertical bar are supported as raw log separators. |
JSON format | No. |
Full regex | No. |
For full text in a single line or multiple lines, keep only the __CONTENT__ field, delete other fields, set Separator to Space, set Escape Character to None, set Invalid Field Filling to None, and disable Key in First Line.
Configuration Item | Description | Remarks |
Key | __CONTENT__ | For full text in a single line or multiple lines, __CONTENT__ is used as the default key, and the raw log is used as the value. When the raw log is shipped, only the __CONTENT__ field is retained. |
Separator | Space | Set Separator to Space for the full text in a single line or multiple lines. |
Escape Character | None | To prevent the raw log from being modified due to escape characters, set Escape Character to None. |
Invalid Field Filling | None | Set Invalid Field Filling to None. |
Key in First Line | Disabled | You don't need to add a description of the field name in the first line of the CSV file for raw log shipping. |
Configuration Item | Value | Description |
Key | Keys of the collected log fields | Only the user-defined fields are retained. |
Separator | A value selected from the drop-down list | Select the separator of the raw log content. If separators are different, raw log shipping is not supported. Currently, only space, tab, comma, semicolon, and vertical bar are supported. |
Escape Character | None | To prevent the raw log from being modified due to escape characters, set Escape Character to None. |
Invalid Field Filling | None | Set Invalid Field Filling to None. |
Key in First Line | Disabled | You don't need to add a description of the field name in the first line of the CSV file for raw log shipping. |
Last updated:2025-12-03 18:30:48
1 indicates a success.
10001 indicates that the COS bucket doesn't exist. You should check the validity of the bucket.
10002 indicates that you have no permission to access the COS bucket. You should make sure that you have the permission.
10003 indicates an internal error. You should try again. If the problem persists, submit a ticket.
Last updated:2025-12-03 18:30:48

Last updated:2025-07-03 20:10:49
Configuration Item | Description | Rule | Required |
Target CKafka Topic Ownership | Current root account: deliver CLS logs to a CKafka topic under the current root account. Another root account: deliver CLS logs to another root account's CKafka topic. For example, if account A ships logs to account B's CKafka topic via CLS, account B must configure an access role in Cloud Access Management (CAM). After the configuration, account A enters the Role ARN and external ID in the CLS console to enable cross-account delivery. The steps to configure the role are as follows: 1. Create a role. Account B logs in to the CAM role management page. 1.1 Create an access policy, with a policy name such as cross_shipper. For the policy syntax, see the following. Note: The authorization in the example follows the principle of least privilege, with the resource restricted to shipping to the CKafka instance (ckafka-12abcde3) in the Guangzhou region only. Grant permissions according to your actual situation.
1.2 Create a new role: select Tencent Cloud account as the role carrier, choose Other root account as the account type, enter account A's ID (such as 100012345678), check Enable verification, and configure the external ID, for example Hello123. 1.3 Configure the role policy: select the access policy cross_shipper (example) created above for the role. 1.4 Save the role, for example as uinA_writeCLS_to_CKafka. 2. Configure the carriers for the role. In the CAM role list, find uinA_writeCLS_to_CKafka (example), click the role, choose Role carrier > Manage carrier > Add product service > CLS, and then click Refresh. The role now has two carriers: account A and cls.cloud.tencent.com (the CLS service). 3. Account A logs in to CLS and fills in the Role ARN and external ID, both of which are provided by account B: account B finds the role uinA_writeCLS_to_CKafka (example) in the CAM role list and views its RoleArn, such as qcs::cam::uin/100001112345:roleName/uinA_writeCLS_to_CKafka; the external ID, such as Hello123, is visible in the role carrier settings. Note: Do not enter extra spaces when filling in the Role ARN and external ID, as this will cause permission verification to fail. Cross-account delivery generates read traffic fees for the log topic under account A. | Current root account / Other root account | No |
CKafka instance | The CKafka Topic in the same region as the current log topic is used as the delivery target. In the cross-account delivery scenario, the user manually fills in the CKafka instance ID and Topic name. | List selection | Required |
Format of Data to Ship | Select Original content to deliver the user's raw logs. | List selection | Required |
Data compression format | No compression / SNAPPY / LZ4. | List selection | Required |
Shipping log preview | Preview your delivered log data. | - | - |
Configuration Item | Description | Rule | Required |
Target CKafka Topic Ownership | Current root account: deliver CLS logs to a CKafka topic under the current root account. Other root account: deliver CLS logs to another root account's CKafka topic. For example, if account A ships logs from CLS to account B's CKafka topic, account B must configure an access role in Cloud Access Management (CAM). After the configuration, account A enters the Role ARN and external ID in the CLS console to enable cross-account delivery. The steps to configure the role are as follows: 1. Create a role. Account B logs in to the CAM role management page. 1.1 Create an access policy, with a policy name such as cross_shipper. For the policy syntax, see the following. Note: The authorization in the example follows the principle of least privilege, with the resource restricted to shipping to the CKafka instance (ckafka-12abcde3) in the Guangzhou region only. Grant permissions according to your actual situation.
1.2 Create a new role: select Tencent Cloud account as the role carrier, choose Other root account as the account type, enter account A's ID (such as 100012345678), check Enable verification, and configure the external ID, for example Hello123. 1.3 Configure the role policy: select the access policy cross_shipper (example) created above for the role. 1.4 Save the role, for example as uinA_writeCLS_to_CKafka. 2. Configure the carriers for the role. In the CAM role list, find uinA_writeCLS_to_CKafka (example), click the role, choose Role carrier > Entity management > Add product service > CLS, and then click Update. The role now has two carriers: account A and cls.cloud.tencent.com (the CLS service). 3. Account A logs in to CLS and fills in the Role ARN and external ID, both of which are provided by account B: account B finds the role uinA_writeCLS_to_CKafka (example) in the CAM role list and views its RoleArn, such as qcs::cam::uin/100001112345:roleName/uinA_writeCLS_to_CKafka; the external ID, such as Hello123, is visible in the role carrier settings. Note: Do not enter extra spaces when filling in the Role ARN and external ID, as this will cause permission verification to fail. Note: Cross-account delivery generates read traffic fees for the log topic under account A. | Current root account / Other root account | No |
CKafka instance | The CKafka Topic in the same region as the current log topic is used as the delivery target. | List selection | Required |
Format of Data to Ship | Select JSON to deliver logs in JSON format. | List selection | Required |
Escape/Do not escape in JSON format | Escape: convert the values of first-layer JSON nodes to String. Select this option if a first-layer node value is a Struct and you need it converted to a String for downstream storage or computation. Example: Original log: {"a":"aa", "b":{"b1":"b1b1", "c1":"c1c1"}} Deliver to CKafka: {"a":"aa","b":"{\"b1\":\"b1b1\", \"c1\":\"c1c1\"}"} Do not escape: make no changes to your JSON structure or hierarchy; the log format stays consistent with that on the collection side. Example: Original log: {"a":"aa", "b":{"b1":"b1b1", "c1":"c1c1"}} Deliver to CKafka: {"a":"aa", "b":{"b1":"b1b1", "c1":"c1c1"}} Note: When a first-layer JSON node holds a numeric value, it is automatically converted to int or float after delivery. Original log: {"a":123, "b":"123", "c":"-123", "d":"123.45", "e":{"e1":123,"f1":"123"}} Deliver to CKafka: {"a":123,"b":123,"c":-123,"d":123.45,"e":{"e1":123,"f1":"123"}} | List selection | Required |
Log Fields to Ship | Flatten or do not flatten the __TAG__ metadata based on your business scenario. __TAG__ metadata: {"__TAG__":{"fieldA":200,"fieldB":"text"}} Flattened: {"__TAG__.fieldA":200,"__TAG__.fieldB":"text"} Not flattened: {"__TAG__":{"fieldA":200, "fieldB":"text"}} | | |
Data compression format | No compression / SNAPPY / LZ4. | List selection | Required |
Shipping log preview | Preview your delivered log data. | - | - |
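To sanity-check a delivery task, a consumer can read the target CKafka topic and parse the delivered JSON logs. A minimal sketch with kafka-python; the topic name, broker address, and un-escaping logic are illustrative placeholders for your own CKafka instance:
import json
from kafka import KafkaConsumer

# Placeholders: replace with your CKafka instance access point and the delivered topic.
consumer = KafkaConsumer(
    "your-delivered-topic",
    bootstrap_servers=["your-ckafka-access-point:9092"],
    auto_offset_reset="earliest",
)
for msg in consumer:
    log = json.loads(msg.value)
    # If "Escape" was selected, first-layer Struct values arrive as JSON strings
    # and can be decoded back when needed.
    for key, value in list(log.items()):
        if isinstance(value, str) and value.startswith("{"):
            try:
                log[key] = json.loads(value)
            except json.JSONDecodeError:
                pass
    print(log)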
Last updated:2025-12-03 18:30:48


Last updated:2025-12-03 18:30:48


Last updated:2025-06-27 19:12:54
Basic Configuration Item | Description | Rule | Required |
Shipping Task Name | Name of the delivery task. | - | Required |
Service log | Write the monitoring metrics of the running delivery task to the free log topic cls_service_log. | - | No |
Shipping Mode | Currently, only batch shipping is supported. | - | No |
File Size | The size of the raw log file to deliver; it works together with the shipping interval, and whichever condition is met first triggers compressing the file and delivering it to DLC. For example, if you configure 256 MB and 15 minutes and the file size reaches 256 MB within 5 minutes, the file size condition triggers the delivery first. | 5 - 256, unit: MB. | No |
Shipping Interval | The interval that triggers a delivery; it works together with the file size, and whichever condition is met first triggers compressing the file and delivering it to DLC. For example, if you configure 256 MB and 15 minutes and the file size is only 200 MB after 15 minutes, the interval condition triggers the delivery first. | 300 - 900, unit: s. | No |
Data Table Configuration Item | Description | Rule | Required |
Data Catalogs | Currently, only DataLakeCatalog is supported. | - | No |
Database | Select your DLC database. | - | Required |
Data Table | Select your DLC data table. | - | Required |
Data Field | Log Field Name: Map fields in the CLS log to the corresponding DLC fields. Only keys of first-layer JSON nodes, such as app_name, can be filled in; nested nodes like details.request_id are not supported.
Preview Logs: Click this button to view log samples (JSON format) on the right side of the page, which helps you select fields and fill in log field names. Data Table Field Name: The data table field is read from DLC and cannot be modified here; go to DLC to edit it. Field Type: Type of the DLC field; it cannot be modified here, go to DLC to modify it. Assignment information returned upon resolution failure: NULL/empty/custom value. If a raw field cannot be parsed as the specified type, the system fills in this value; if this value cannot be parsed either, the default zero value is used: int/bigint: 0; float/double/decimal: 0; date: today; timestamp: the current timestamp. Enable Mapping: whether to map this field to the DLC table; toggle it off if not required. | - | Required |
Partition Field | Log Field Name: the log field name used to map to the partition field in DLC. If your DLC table is partitioned by time, we recommend using the log time field here, such as __TIMESTAMP__. Data Table Field Name: the partition field, read from DLC; it cannot be modified here, go to DLC to edit it. Field Type: type of the partition field, read from DLC; it cannot be modified here, go to DLC to modify it. | - | Required |
Last updated:2025-11-18 11:23:06
Basic Configuration Item | Required | Description | Example |
Shipping Task Name | Required | Name of the shipping task. | Ship to splunk-test123. |
Log Topic | No | Shipped log topic. | Guangzhou/test123. |
Data Format | No | Original log in JSON format. When selecting the JSON format for log shipping, you can choose to either include or exclude the following fields: CLS reserved fields and the __PKGID, __PKGLOGID, and __TAG__ fields used to indicate the log sequence. After configuring the required fields, you can preview the shipped logs below. | - |
Service log | No | Write the monitoring metrics of the shipping task to the free log topic cls_service_log. | Enable the switch. |
Shipping log preview | No | Preview the logs you need to ship. | {"a":123,"b":123,"c":-123,"d":123.45,"e":{"e1":123,"f1":"123"}} |
Target Configuration Item | Required | Description | Example |
Access method | Required | Private network: Splunk is deployed in Tencent Cloud or connected via Direct Connect or CCN. Public network: generally refers to Splunk Cloud Platform, accessed over the public network. | - |
Network service type | Required | When you select the private network, you need to configure the network service type: CLB: The service is forwarded through Cloud Load Balancer (CLB). CVM: The service is deployed directly on Cloud Virtual Machine (CVM) or Tencent Kubernetes Engine (TKE). CCN: The service is connected to Tencent Cloud through CCN. Direct Connect Gateway: The service is connected to Tencent Cloud through Direct Connect Gateway. | - |
Network | Required | When you select the private network, you need to configure the associated network. Select the VPC where your Splunk resides. | - |
Splunk HEC Service Address | Required | Address and port of the Splunk HTTP Event Collector (HEC). | 10.0.0.113:8088 |
HEC Token | Required | Token generated in the Splunk HEC configuration. | 59f9bXXc-ae2f-43c1-8c93-4360XXXX3ef1 |
Authentication mechanism | No | If you enable SSL authentication in the Splunk HEC configuration, select SSL. | SSL |
Connectivity check | Required | Click the button. The task can be submitted only after the connectivity check is passed. | - |
Target Configuration Item | Required | Description | Example |
Enable Indexer Acknowledgment | No | Splunk processes the next batch of data only after confirming that the data from the HEC has been written to the index. If indexer acknowledgment is enabled in the HEC flag, check the option to enable this feature. | - |
Custom URI | Required | If you use a custom URI in Splunk, enter the URI. | https://splunk-log.example.com:8088/services/collector |
Data Source | Required | Location where logs are generated, such as a directory, network port, or program name. | /var/log/syslog |
Source Type | Required | Log data format/structure, which determines how Splunk parses data. | syslog or json |
Index Name | Required | Write CLS data to the index. | test_index |
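Before submitting the task, the Splunk HEC address and token can also be verified manually. A minimal sketch with Python requests; the address, token, source type, and index reuse the placeholder values from the tables above:
import requests

HEC_URL = "https://10.0.0.113:8088/services/collector/event"
HEC_TOKEN = "59f9bXXc-ae2f-43c1-8c93-4360XXXX3ef1"

resp = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json={"event": {"message": "CLS shipping connectivity test"},
          "sourcetype": "json", "index": "test_index"},
    verify=False,  # switch to True or a CA bundle if SSL authentication is enabled
    timeout=5,
)
# A 200 response with {"text":"Success","code":0} indicates the HEC accepted the event.
print(resp.status_code, resp.text)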
Last updated:2025-11-25 19:15:50

Configuration Item | Description | Rule |
Consumption data format | JSON: Consume logs in JSON data format. Original content: Consume logs in their original format. | Select |
Data Range | Historical + latest: New version, can consume all data within the log topic lifecycle. Latest: Earlier version, can only consume the latest data. Note: Two log topics with different data ranges cannot use the same consumption group. For example: if Log Topic A is configured as Historical + latest and Topic B as latest, Log Topic A and B cannot use the same consumption group. | Select |
Consume log fields | Select the log fields you need to consume. JSON escape/do not escape behave as follows: Escape: convert the values of first-layer JSON nodes to String. Select this option if a first-layer node value is a Struct and you need it converted to a String in advance for downstream storage or computation. Do not escape: make no changes to your JSON structure or hierarchy; the log format stays consistent with that on the collection side. Note: When a first-layer JSON node holds a numeric value, it is automatically converted to int or float after consumption. Log: {"a":123, "b":"123", "c":"-123", "d":"123.45", "e":{"e1":123,"f1":"123"}} Consumption: {"a":123,"b":123,"c":-123,"d":123.45,"e":{"e1":123,"f1":"123"}} Flatten or do not flatten the __TAG__ metadata as follows. __TAG__ metadata: {"__TAG__":{"fieldA":200,"fieldB":"text"}} Flattened: {"__TAG__.fieldA":200,"__TAG__.fieldB":"text"} Not flattened: {"__TAG__":{"fieldA":200, "fieldB":"text"}} | Select |
Data compression format | Supports SNAPPY, LZ4, and No compression. | Select |
Public Network Consumption | When disabled, logs cannot be consumed over the public network; only private network consumption is available. | Switch |
Consumption Log Preview | Preview your consumed log data. | - |
Service log | Related logs for consumption monitoring charts. Data is provided free by CLS. | Switch |
Parameter | Description |
User authentication mode | Currently, only SASL_PLAINTEXT is supported. |
hosts | Intranet consumption: kafkaconsumer-${region}.cls.tencentyun.com:9095 Public network consumption: kafkaconsumer-${region}.cls.tencentcs.com:9096. For details, see Log Consumption - Consumption Over Kafka Protocol. |
topic | Consumption topic ID, please copy it from the console for consumption over Kafka. Example: XXXXXX-633a268c-XXXX-4a4c-XXXX-7a9a1a7baXXXX. |
username | Configured as ${LogSetID}, the logset ID. Example: 0f8e4b82-8adb-47b1-XXXX-XXXXXXXXXX, please copy it from the console for consumption over Kafka. |
password | Configured as ${SecretId}#${SecretKey}, for example: XXXXXXXXXXXXXX#YYYYYYYY. Log in to Cloud Access Management and click Access Keys in the left sidebar; an API key or a project key can be used. If your sub-account needs to use this feature, we recommend using a sub-account key. When authorizing the sub-account, configure both the action and the resource in the access policy to the minimum permissible range. For more details, see Kafka Protocol Consumption Authorization. |
Last updated:2025-12-03 18:30:48





Last updated:2025-12-03 18:30:48
Parameter Name | Default Value | Description |
auto.offset.reset | latest | Earliest: Automatically reset to the earliest offset. Latest: Automatically reset to the latest offset. |
enable.auto.commit | true | If true, the consumer offset will be submitted periodically in the backend. |
auto.commit.interval.ms | 5000 ms | If enable.auto.commit is set to true, the frequency of automatic commit for consumer offset (ms). |
Parameter Name | Default Value | Description |
fetch.max.wait.ms | 500 ms | Maximum waiting time for consumers to pull messages |
fetch.min.bytes | 1MB | The minimum data volume a server returns for a single request. If insufficient data is available, the request waits. |
fetch.max.bytes | 50MB | The maximum data volume a server returns for a single request. Too small (such as 1M): each response carries little data, requiring more requests to obtain sufficient data and increasing the number of sessions on the server. Too large (such as 50M+): may exceed the client's processing capability, causing processing timeouts, consumption backlog, and, in extreme cases, repeatedly requesting the same batch of data from the server, which increases your metered billing. |
request.timeout.ms | 30000 ms | Request timeout period. Use together with fetch.max.bytes. Too short (such as 5s): May cause batch processing to return before reaching fetch.max.bytes, reducing processing efficiency. Too long (such as 60s): Increases message processing latency. |
max.poll.records | 5000 | Maximum record count returned by a single poll() call. |
session.timeout.ms | 10000 ms | Consumer session timeout period with Kafka server (ms). |
heartbeat.interval.ms | 3000 ms | Consumer heartbeat interval (ms). |
Parameter Name | Default Value | Description |
reconnect.backoff.ms | 50 ms | Initial backoff time to reconnect to server (ms). |
retry.backoff.ms | 100 ms | Waiting time before retry failure (ms). |
max.poll.interval.ms | 120000 ms | Maximum allowed interval between two poll() calls (ms). If it is exceeded, the consumer is considered failed and a rebalance is triggered. |
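For reference, the parameters above map directly onto kafka-python constructor arguments (dots become underscores). A minimal sketch using the listed values; the topic, consumer group, endpoint, and credentials are placeholders:
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "your-consumption-topic",
    group_id="your-consumer-group",
    bootstrap_servers=["kafkaconsumer-${region}.cls.tencentyun.com:9095"],
    security_protocol="SASL_PLAINTEXT",
    sasl_mechanism="PLAIN",
    sasl_plain_username="${logsetID}",
    sasl_plain_password="${SecretId}#${SecretKey}",
    # Offset handling (first table above)
    auto_offset_reset="latest",
    enable_auto_commit=True,
    auto_commit_interval_ms=5000,
    # Fetch and session tuning (second table above)
    fetch_max_wait_ms=500,
    fetch_min_bytes=1024 * 1024,
    fetch_max_bytes=50 * 1024 * 1024,
    request_timeout_ms=30000,
    max_poll_records=5000,
    session_timeout_ms=10000,
    heartbeat_interval_ms=3000,
    # Reconnect and rebalance tuning (third table above)
    reconnect_backoff_ms=50,
    retry_backoff_ms=100,
    max_poll_interval_ms=120000,
)
for message in consumer:
    print(message.partition, message.offset, message.value)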
Last updated:2025-12-03 18:30:48
import uuid
from kafka import KafkaConsumer, TopicPartition, OffsetAndMetadata

consumer = KafkaConsumer(
    # Topic name provided by the cls kafka protocol consumption console, such as XXXXXX-633a268c-XXXX-4a4c-XXXX-7a9a1a7baXXXX, can be copied from the console
    'Your consumption topics',
    group_id = 'your consumer group name',
    auto_offset_reset = 'earliest',
    # Service address + port, public network port 9096, private network port 9095, example is intranet consumption, please fill in according to your actual situation
    bootstrap_servers = ['kafkaconsumer-${region}.cls.tencentyun.com:9095'],
    security_protocol = "SASL_PLAINTEXT",
    sasl_mechanism = 'PLAIN',
    # username is the logset ID, such as ca5cXXXXdd2e-4ac0af12-92d4b677d2c6
    sasl_plain_username = "${logsetID}",
    # The password is a string composed of the user's SecretId#SecretKey, such as AKID********************************#XXXXuXtymIXT0Lac. Be careful not to lose the #. Use sub-account keys. When the root account authorizes the sub-account, follow the principle of least privilege. The actions and resources in the sub-account access policy should be configured to the minimum range to fulfill the operations.
    sasl_plain_password = "${SecretId}#${SecretKey}",
    api_version = (0, 10, 1)
)
print('begin')
for message in consumer:
    print("Topic:[%s] Partition:[%d] Offset:[%d] Value:[%s]" % (message.topic, message.partition, message.offset, message.value))
print('end')
from kafka import KafkaConsumer
import threading

TOPIC_NAME = 'Your consumption topics'
GROUP_ID = 'your consumer group name'
# Service address + port, public network port 9096, private network port 9095, example is intranet consumption, please fill in according to your actual situation
BOOTSTRAP_SERVERS = 'kafkaconsumer-${region}.cls.tencentyun.com:9095'

def consume_messages(thread_id):
    # Create a Kafka consumer instance
    consumer = KafkaConsumer(
        TOPIC_NAME,
        group_id=GROUP_ID,
        bootstrap_servers=BOOTSTRAP_SERVERS,
        value_deserializer=lambda m: m.decode('utf-8'),
        auto_offset_reset='earliest',
        security_protocol="SASL_PLAINTEXT",
        sasl_mechanism='PLAIN',
        sasl_plain_username="${logsetID}",
        sasl_plain_password="${SecretId}#${SecretKey}",
        api_version=(2, 5, 1)
    )
    try:
        for message in consumer:
            print(f"Thread {thread_id}: partition = {message.partition}, offset = {message.offset}, value = {message.value}")
    except KeyboardInterrupt:
        pass
    finally:
        # Stop the consumer.
        consumer.close()

if __name__ == "__main__":
    # Start 3 consumer threads. This is an example. Please configure according to actual conditions.
    num_consumers = 3
    threads = []
    for i in range(num_consumers):
        thread = threading.Thread(target=consume_messages, args=(i,))
        threads.append(thread)
        thread.start()
    # Wait for all threads to complete
    for thread in threads:
        thread.join()
In sasl.jaas.config, ${SecretId}#${SecretKey} must be followed by a semicolon (;). Do not leave any field out; otherwise, an error will be reported.
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.5.0</version>
</dependency>
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class KafkaConsumerGroupTest {
    public static void consume() {
        Properties props = new Properties();
        String logset_id = "${logsetID}";
        // Topic name for display on the page of kafka protocol consumption in the CLS console
        String topic_id = "Your consumption topics";
        String accessKeyID = System.getenv("${SecretId}");
        String accessKeySecret = System.getenv("${SecretKey}");
        String groupId = "your consumer group name";
        // Service address + port, public network port 9096, private network port 9095, example is intranet consumption, please fill in according to your actual situation
        String hosts = "kafkaconsumer-${region}.cls.tencentyun.com:9095";
        props.put("bootstrap.servers", hosts);
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"" + logset_id + "\" password=\"" + accessKeyID + "#" + accessKeySecret + "\";");
        // Kafka consumer configuration
        props.put("group.id", groupId);
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "5000");
        props.put("session.timeout.ms", "10000");
        props.put("auto.offset.reset", "earliest");
        props.put("max.poll.interval.ms", "120000");
        props.put("heartbeat.interval.ms", "3000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Create a Kafka consumer instance
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Collections.singletonList(topic_id));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Received message: (" + record.key() + ", " + record.value() + ") at offset " + record.offset());
            }
        }
    }

    public static void main(String[] args) {
        consume();
    }
}
${SecretId}#${SecretKey} must be complete (do not leave any field out); otherwise, an error will be reported.
package main

import (
    "context"
    "fmt"
    "github.com/Shopify/sarama"
    "log"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    // Create Sarama consumer configuration
    // TOPIC_NAME is your consumption topic, view it in the console.
    topicName := "${TOPIC_NAME}"
    // GROUP_ID is your consumer group name
    groupID := "${GROUP_ID}"
    // BOOTSTRAP_SERVERS is the consumption service host address and port, public network port 9096, private network port 9095, such as kafkaconsumer-${region}.cls.tencentyun.com:9095
    endpoint := "${BOOTSTRAP_SERVERS}"
    config := sarama.NewConfig()
    config.Net.SASL.Enable = true
    config.Net.SASL.User = "${logsetID}"
    config.Net.SASL.Password = "${SecretId}#${SecretKey}"
    config.Net.SASL.Mechanism = sarama.SASLTypePlaintext
    config.Consumer.Group.Rebalance.Strategy = sarama.BalanceStrategyRoundRobin
    config.Version = sarama.V1_1_1_0
    config.Consumer.Offsets.Initial = sarama.OffsetNewest

    // Create Sarama consumer
    sarama.Logger = log.New(os.Stdout, "[Sarama] ", log.LstdFlags)
    consumer, err := sarama.NewConsumerGroup([]string{endpoint}, groupID, config)
    if err != nil {
        log.Fatal(err)
    }
    defer consumer.Close()

    // Process received messages
    handler := &ConsumerGroupHandler{}
    signals := make(chan os.Signal, 1)
    signal.Notify(signals, syscall.SIGINT, syscall.SIGTERM)
    go func() {
        for {
            err := consumer.Consume(context.Background(), []string{topicName}, handler)
            if err != nil {
                log.Fatal(err)
            }
            if handler.ready {
                break
            }
        }
    }()
    <-signals
    fmt.Println("Exiting...")
}

// ConsumerGroupHandler implements the sarama.ConsumerGroupHandler API
type ConsumerGroupHandler struct {
    ready bool
}

// Setup is called before the consumer group starts up
func (h *ConsumerGroupHandler) Setup(sarama.ConsumerGroupSession) error {
    h.ready = true
    return nil
}

// Cleanup is called after the consumer group stops
func (h *ConsumerGroupHandler) Cleanup(sarama.ConsumerGroupSession) error {
    h.ready = false
    return nil
}

// ConsumeClaim consumes messages from the Claim
func (h *ConsumerGroupHandler) ConsumeClaim(session sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
    for message := range claim.Messages() {
        fmt.Printf("Received message: %s\n", string(message.Value))
        session.MarkMessage(message, "")
    }
    return nil
}
Last updated:2025-12-03 18:30:48
filebeat.inputs:
- type: kafka
  hosts:
    - kafkaconsumer-${region}.cls.tencentyun.com:9095
  topics: ["your consumption topics"]
  group_id: "your consumer group name"
  username: "${logsetID}"
  password: "${SecretId}#${SecretKey}"
  sasl.mechanism: "PLAIN"
processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
output.file:
  path: /tmp
  filename: filebeat_data.log
  rotate_every_kb: 102400
  number_of_files: 7
input {
    kafka {
        # The topic name provided by the cls kafka protocol consumption console, such as XXXXXX-633a268c-XXXX-4a4c-XXXX-7a9a1a7baXXXX, can be copied from the console
        topics => "Your consumption topics"
        # Service address + port, public network port 9096, private network port 9095, example is for intranet consumption, fill in based on your actual situation
        bootstrap_servers => "kafkaconsumer-${region}.cls.tencentyun.com:9095"
        group_id => "your consumer group name"
        security_protocol => "SASL_PLAINTEXT"
        sasl_mechanism => "PLAIN"
        # The username is the logset ID, such as ca5cXXXXdd2e-4ac0af12-92d4b677d2c6
        # The password is a string composed of the user's SecretId#SecretKey, such as AKID********************************#XXXXuXtymIXT0Lac. Be careful not to lose the #. Use sub-account keys. When the root account authorizes the sub-account, follow the principle of least privilege. Configure the action and resource in the sub-account access policy to the minimum range to fulfill the operations.
        sasl_jaas_config => "org.apache.kafka.common.security.plain.PlainLoginModule required username='${logsetID}' password='${SecretId}#${SecretKey}';"
    }
}
output {
    stdout { codec => json }
}
a1.sources = source_kafka
a1.sinks = sink_local
a1.channels = channel1

# Configure Source
a1.sources.source_kafka.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.source_kafka.batchSize = 10
a1.sources.source_kafka.batchDurationMillis = 200000
# Service address + port, public network port 9096, private network port 9095, example is for intranet consumption, fill in based on your actual situation
a1.sources.source_kafka.kafka.bootstrap.servers = kafkaconsumer-${region}.cls.tencentyun.com:9095
# The topic name provided by the cls kafka protocol consumption console, such as XXXXXX-633a268c-XXXX-4a4c-XXXX-7a9a1a7baXXXX, can be copied from the console
a1.sources.source_kafka.kafka.topics = your consumption topics
# Replace with your consumer group name
a1.sources.source_kafka.kafka.consumer.group.id = your consumer group name
a1.sources.source_kafka.kafka.consumer.auto.offset.reset = earliest
a1.sources.source_kafka.kafka.consumer.security.protocol = SASL_PLAINTEXT
a1.sources.source_kafka.kafka.consumer.sasl.mechanism = PLAIN
# The username is the logset ID, such as ca5cXXXXdd2e-4ac0af12-92d4b677d2c6
# The password is a string composed of the user's SecretId#SecretKey, such as AKID********************************#XXXXuXtymIXT0Lac. Be careful not to lose the #. It is recommended to use sub-account keys. When the root account authorizes the sub-account, follow the principle of least privilege. Configure the action and resource in the sub-account access policy to the minimum range to fulfill the operations. Note that jaas.config must end with a semicolon; an error will be reported if it is missing.
a1.sources.source_kafka.kafka.consumer.sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule required username="${logsetID}" password="${SecretId}#${SecretKey}";

# Configure sink
a1.sinks.sink_local.type = logger

a1.channels.channel1.type = memory
a1.channels.channel1.capacity = 1000
a1.channels.channel1.transactionCapacity = 100

# Bind source and sink to channel
a1.sources.source_kafka.channels = channel1
a1.sinks.sink_local.channel = channel1
Last updated:2025-12-03 18:30:49
CREATE TABLE `nginx_source` (
    -- Fields in the log
    `@metadata` STRING,
    `@timestamp` TIMESTAMP,
    `agent` STRING,
    `ecs` STRING,
    `host` STRING,
    `input` STRING,
    `log` STRING,
    `message` STRING,
    -- kafka partition
    `partition_id` BIGINT METADATA FROM 'partition' VIRTUAL,
    `ts` TIMESTAMP(3) METADATA FROM 'timestamp'
) WITH (
    'connector' = 'kafka',
    -- cls kafka protocol consumption topic name provided by the console, such as XXXXXX-633a268c-XXXX-4a4c-XXXX-7a9a1a7baXXXX, can be copied from the console
    'topic' = 'Your consumption topics',
    -- Service address + port, public network port 9096, private network port 9095, example is intranet consumption, fill in according to your actual situation
    'properties.bootstrap.servers' = 'kafkaconsumer-${region}.cls.tencentyun.com:9095',
    -- Replace with your consumer group name
    'properties.group.id' = 'Consumer group ID',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'json',
    'json.fail-on-missing-field' = 'false',
    'json.ignore-parse-errors' = 'true',
    -- username is the logset ID, for example ca5cXXXXdd2e-4ac0af12-92d4b677d2c6
    -- The password is a string composed of the user's SecretId#SecretKey, such as AKID********************************#XXXXuXtymIXT0Lac. Be careful not to lose the #. Use sub-account keys. When the root account authorizes the sub-account, follow the principle of least privilege. Configure the action and resource in the sub-account access policy to the minimum range to fulfill the operations. Note that jaas.config must end with a semicolon; an error will be reported if it is missing.
    'properties.sasl.jaas.config' = 'org.apache.kafka.common.security.plain.PlainLoginModule required username="${logsetID}" password="${SecretId}#${SecretKey}";',
    'properties.security.protocol' = 'SASL_PLAINTEXT',
    'properties.sasl.mechanism' = 'PLAIN'
);
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>1.14.4</version>
</dependency>
CREATE TABLE `nginx_source` (
    -- Fields in the log
    `@metadata` STRING,
    `@timestamp` TIMESTAMP,
    `agent` STRING,
    `ecs` STRING,
    `host` STRING,
    `input` STRING,
    `log` STRING,
    `message` STRING,
    -- kafka partition
    `partition_id` BIGINT METADATA FROM 'partition' VIRTUAL,
    `ts` TIMESTAMP(3) METADATA FROM 'timestamp'
) WITH (
    'connector' = 'kafka',
    -- cls kafka protocol consumption topic name provided by the console, such as XXXXXX-633a268c-XXXX-4a4c-XXXX-7a9a1a7baXXXX, can be copied from the console
    'topic' = 'Your consumption topics',
    -- Service address + port, public network port 9096, private network port 9095, example is intranet consumption, fill in according to your actual situation
    'properties.bootstrap.servers' = 'kafkaconsumer-${region}.cls.tencentyun.com:9095',
    -- Replace with your consumer group name
    'properties.group.id' = 'Consumer group ID',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'json',
    'json.fail-on-missing-field' = 'false',
    'json.ignore-parse-errors' = 'true',
    -- username is the logset ID, for example ca5cXXXXdd2e-4ac0af12-92d4b677d2c6
    -- The password is a string composed of the user's SecretId#SecretKey, such as AKID********************************#XXXXuXtymIXT0Lac. Be careful not to lose the #. It is recommended to use sub-account keys. When the root account authorizes the sub-account, follow the principle of least privilege. Configure the action and resource in the sub-account access policy to the minimum range to fulfill the operations. Note that jaas.config must end with a semicolon; an error will be reported if it is missing.
    'properties.sasl.jaas.config' = 'org.apache.kafka.common.security.plain.PlainLoginModule required username="${logsetID}" password="${SecretId}#${SecretKey}";',
    'properties.security.protocol' = 'SASL_PLAINTEXT',
    'properties.sasl.mechanism' = 'PLAIN'
);
select count(*) , host from nginx_source group by host;
Last updated:2025-12-03 18:30:49


pip install git+https://github.com/TencentCloud/tencentcloud-cls-sdk-python.git
log_group {
    source                     // Log source, which is usually the machine's IP address.
    filename                   // Log file name
    logs {
        time                   // Log time, which is a Unix timestamp in microseconds.
        user_defined_log_kvs   // User log fields
    }
}
import json
import time

# ConsumerProcessorBase comes from the tencentcloud-cls-sdk-python package installed above.
class SampleConsumer(ConsumerProcessorBase):
    last_check_time = 0

    def initialize(self, topic_id):
        self.topic_id = topic_id

    def process(self, log_groups, offset_tracker):
        for log_group in log_groups:
            for log in log_group.logs:
                # Process a single row of data.
                item = dict()
                item['filename'] = log_group.filename
                item['source'] = log_group.source
                item['time'] = log.time
                for content in log.contents:
                    item[content.key] = content.value
                # Subsequent data processing
                # put your business logic here
                print(json.dumps(item))

        # offset commit
        current_time = time.time()
        if current_time - self.last_check_time > 3:
            try:
                self.last_check_time = current_time
                offset_tracker.save_offset(True)
            except Exception:
                import traceback
                traceback.print_exc()
        else:
            try:
                offset_tracker.save_offset(False)
            except Exception:
                import traceback
                traceback.print_exc()

        return None
Parameter | Description | Default Value | Value Range |
endpoint | Access domain of the topic's region. For more details, see Regions and Access Domains. | - | Supported regions: ALL |
access_key_id | SecretId of the Tencent Cloud API key used for access. | - | - |
access_key | SecretKey of the Tencent Cloud API key used for access. | - | - |
region | Topic's region. For example, ap-beijing, ap-guangzhou, ap-shanghai. For more details, see Regions and Access Domains. | - | Supported regions: ALL |
logset_id | Logset ID. Only one logset is supported. | - | - |
topic_ids | Log topic ID. For multiple topics, use , to separate. | - | - |
consumer_group_name | Consumer Group Name | - | - |
internal | Private network consumption: TRUE. Public network consumption: FALSE. | FALSE | TRUE/FALSE |
consumer_name | Consumer name. Within the same consumer group, consumer names must be unique. | - | A string consisting of 0-9, aA-zZ, '-', '_', '.'. |
heartbeat_interval | The interval of heartbeats. If consumers fail to report a heartbeat for two intervals, they will be considered offline. | 20 | 0-30 minutes |
data_fetch_interval | The interval of consumer data pulling. Cannot be less than 1 second. | 2 | - |
offset_start_time | The start time for data pulling. The string type of UNIX Timestamp , with second-level precision. For example, 1711607794. It can also be directly configured as "begin" and "end". begin: The earliest data within the log topic lifetime. end: The latest data within the log topic lifetime. | "end" | "begin"/"end"/UNIX Timestamp |
max_fetch_log_group_size | The data size for a consumer in a single pulling. Defaults to 2 M and up to 10 M. | 2097152 | 2M - 10M |
offset_end_time | The end time for data pulling. Supports a string-type UNIX Timestamp , with second-level precision. For example, 1711607794. Not filling this field represents continuous pulling. | - | - |
import os
import signal
import sys
import time

# LogHubConfig, ConsumerWorker, and SampleConsumer come from the tencentcloud-cls-sdk-python package installed above.
class App:
    def __init__(self):
        self.shutdown_flag = False
        # access endpoint
        self.endpoint = os.environ.get('TENCENTCLOUD_LOG_SAMPLE_ENDPOINT', '')
        # region
        self.region = os.environ.get('TENCENTCLOUD_LOG_SAMPLE_REGION', '')
        # secret id
        self.access_key_id = os.environ.get('TENCENTCLOUD_LOG_SAMPLE_ACCESSID', '')
        # secret key
        self.access_key = os.environ.get('TENCENTCLOUD_LOG_SAMPLE_ACCESSKEY', '')
        # logset id
        self.logset_id = os.environ.get('TENCENTCLOUD_LOG_SAMPLE_LOGSET_ID', '')
        # topic ids
        self.topic_ids = os.environ.get('TENCENTCLOUD_LOG_SAMPLE_TOPICS', '').split(',')
        # consumer group name
        self.consumer_group = 'consumer-group-1'
        # consumer id, we recommend setting the consumer count equal to the log topic partition count.
        self.consumer_name1 = "consumer-group-1-A"
        assert self.endpoint and self.access_key_id and self.access_key and self.logset_id, ValueError(
            "endpoint/access_id/access_key and logset_id cannot be empty")
        signal.signal(signal.SIGTERM, self.signal_handler)
        signal.signal(signal.SIGINT, self.signal_handler)

    def signal_handler(self, signum, frame):
        print(f"catch signal {signum}, cleanup...")
        self.shutdown_flag = True

    def run(self):
        print("*** start to run consumer...")
        self.consume()
        # waiting for exit signal
        while not self.shutdown_flag:
            time.sleep(1)
        # shutdown consumer
        print("*** stopping workers")
        self.consumer.shutdown()
        sys.exit(0)

    def consume(self):
        try:
            # consumer config
            option1 = LogHubConfig(self.endpoint, self.access_key_id, self.access_key, self.region,
                                   self.logset_id, self.topic_ids, self.consumer_group,
                                   self.consumer_name1, heartbeat_interval=3, data_fetch_interval=1,
                                   offset_start_time='begin', max_fetch_log_group_size=1048576)
            # init consumer
            self.consumer = ConsumerWorker(SampleConsumer, consumer_option=option1)
            # start consumer
            print("*** start to consume data...")
            self.consumer.start()
        except Exception as e:
            import traceback
            traceback.print_exc()
            raise e
Last updated:2024-08-27 17:58:40
Configuration Item | Description | Limit | Required |
Task Name | Name of the shipping task | No more than 128 characters. | Required |
Shipping object | Prometheus / Thanos | Configure the RemoteWrite endpoint of the shipping destination in advance. | - |
Access method | Private network: Choose a private network if the shipping destination is accessed through the private network. For example, Prometheus is deployed on a CVM instance in the same region as the CLS instance. Public network: Not supported currently. | The shipping destination and metric topic are in the same region. | Required |
Network (private network) | VPC network of the shipping destination | - | Required |
Network service type (private network) | CVM: The shipping destination is directly deployed on a CVM instance. Note: Choose CVM if you want to ship metric topics to Prometheus on the Tencent Cloud Observability Platform. CLB: The service address and port of the shipping destination are forwarded through Tencent Cloud CLB. | - | - |
Remote Write address | Example: http://192.168.2.17:9090/api/v1/prom/write | - | Required |
Authentication method | Authentication method for accessing the time series database via RemoteWrite: BASIC AUTH: Username and password are required. Note: For Prometheus on the Tencent Cloud Observability Platform, enter the Grafana page, and choose DataSource > Settings > Basic Auth Details on the left configuration pane to view the user and password. No authentication | - | Required |
Test connectivity | Test the network connectivity between CLS and the shipping destination. Note: The metric shipping configuration can be submitted only after the connectivity test passes. | - | Required |
Last updated:2024-01-20 17:59:35
Name | Description |
Alarm policy | It is the management unit for monitoring alarms. An alarm policy contains various information such as monitoring object, monitoring period, trigger condition, alarm frequency, and notification template. |
Monitoring object | A log topic serves as the monitoring object: a search or analysis statement is executed on the log topic, and the result is then checked. |
Trigger condition | The query and analysis result is checked, and if the trigger condition expression is true, an alarm will be triggered. |
Monitoring period | It is the policy execution period. A fixed period (such as every 5 minutes) and a fixed time (such as 12:00 every day) are supported. |
Alarm frequency | It is the alarm frequency after the trigger condition is met, which helps avoid frequent alarm notifications. |
Notification group | Supported notification channels include SMS, WeChat, phone calls, email, and webhook. |

Last updated:2024-01-20 17:59:36
status:error | select count(*) as ErrCount
domain:"aaa.com" | select avg(request_time) as Latency
$N.keyname is used to reference the query statement result. Here, $N indicates the Nth query statement in the current alarm policy, and keyname indicates the corresponding field name. For more information on the expression syntax, see Trigger Condition Expression.
$1.ErrCount > 10. Here, $1 indicates the first query statement, and ErrCount indicates the ErrCount field in the result.
$2.Latency > 5. Here, $2 indicates the second query statement, and Latency indicates the Latency field in the result.
For example, the query statement * | select avg(request_time) as Latency,domain group by domain order by Latency desc limit 5 returns multiple results:
Latency | Domain |
12.56 | aaa.com |
9.45 | bbb.com |
7.23 | ccc.com |
5.21 | ddd.com |
4.78 | eee.com |
Period Configuration Method | Description | Example |
Fixed frequency | Monitoring tasks are performed at fixed intervals. Interval: 1–1,440 minutes. Granularity: minute. | Monitoring tasks are performed once every 5 minutes |
Fixed time | Monitoring tasks are performed once at fixed points in time. Time point range: 00:00–23:59. Granularity: minute. | Monitoring tasks are performed once at 02:00 every day |
Multi-dimensional Analysis Type | Description |
Related raw logs | Get the raw logs that meet the search condition of the query statement. The log field, quantity, and display form can be configured. For example, when an alarm is triggered by too many error logs, you can view the detailed logs in the alarm. |
Top 5 field values by occurrence and their percentages | For all the logs within the time range when the alarm is triggered, group them based on the specified field and get the top 5 field values and their percentages. For example, when an alarm is triggered by too many error logs, you can get the top 5 URLs and top 5 response status codes. |
Custom search and analysis | Execute the custom search and analysis statement for all the logs within the time range when the alarm is triggered. Example 1: `* |
Last updated:2024-01-20 17:59:35
Operator | Description | Example |
$N.keyname | Imports the query analysis result. N is the monitoring object number, keyname is the field name in the query analysis result (which must start with a letter and can contain letters, digits, and underscores. We recommend you use the AS syntax to set an alias for the result.) | $1.ErrCount |
+ | Addition operator | $1.ErrCount+$1.FatCount>10 |
- | Subtraction operator | $1.Count-$1.InfoCount>100 |
* | Multiplication operator | $1.RequestMilSec*1000>10 |
/ | Division operator | $1.RequestSec/1000>0.01 |
% | Modulo operator | $1.keyA%10==0 |
== | Comparison operator: equal to | $1.ErrCount==100 $1.level=="Error" |
> | Comparison operator: greater than | $1.ErrCount>100 |
< | Comparison operator: less than | $1.pv<100 |
>= | Comparison operator: greater than or equal to | $1.ErrCount>=100 |
<= | Comparison operator: less than or equal to | $1.pv<=100 |
!= | Comparison operator: not equal to | $1.level!="Info" |
() | Parentheses for controlling the operation priority | ($1.a+$1.b)/$1.c>100 |
&& | Logical operator: AND | $1.ErrCount>100 && $1.level=="Error" |
|| | Logical operator: OR | $1.ErrCount>100 || $1.level=="Error" |
If the result of $1.a+$1.b is less than 100, no alarm will be triggered; if the result is greater than or equal to 100, an alarm will be triggered.
keyname in $N.keyname is the field name of the query analysis result. It must start with a letter and can contain letters, digits, and underscores, such as level:error | select count(*) AS errCount. errCount can be directly used as keyname in the trigger condition expression. If the field name contains special symbols, you need to enclose the imported variable with [], such as [$1.count(*)]. We recommend you use an AS analysis statement to set an alias for the result field name.
$1.key1 imports the key1 field name in the query whose number is 1, and $2.key2 imports the key2 field name in the query whose number is 2.
If a query returns multiple results, the expression is evaluated for each combination of results, and an alarm is triggered as soon as one evaluation is true. For example, if the expression is $1.a+$2.b>100, analysis 1 returns m results, and analysis 2 returns n results, then the expression will be calculated up to m * n times, and calculation will stop when $1.a+$2.b>100 is true or after 1,000 calculations.
Last updated:2024-01-20 17:59:36

Last updated:2024-01-20 17:59:36

Last updated:2024-01-20 17:59:35
Last updated:2024-01-20 17:59:36
Last updated:2025-11-19 19:39:42
Detailed log:{{.QueryLog[0][0]}}
Detailed log:{"content":{"body_bytes_sent":"33352","http_referer":"-","http_user_agent":"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.17 Safari/537.36","remote_addr":"201.80.83.199","remote_user":"-","request_method":"GET","request_uri":"/content/themes/test-com/images/header_about.jpg","status":"404","time_local":"01/Nov/2018:01:16:31"},"fileName":"/root/testLog/nginx.log","pkg_id":"285A243662909DE3-70A","source":"172.17.0.2","time":1653831150008,"topicId":"a54de372-ffe0-49ae-a12e-c340bb2b03f2"}
Variable | Configuration | Sample Variable Value | Description |
{{.UIN}} | Account ID | 100007xxx827 | - |
{{.Nickname}} | Account nickname | xx company | - |
{{.Region}} | Region | Guangzhou | - |
{{.Alarm}} | Alarm policy name | Too many NGINX error logs | - |
{{.AlarmID}} | Alarm policy ID | notice-3abd7ad6-15b7-4168-xxxx-52e5b961a561 | - |
{{.ExecuteQuery}} | Executed Statement | ["status:>=400 | select count(*) as errorLogCount","status:>=400 | select count(*) as errorLogCount,request_uri group by request_uri order by count(*) desc"] | It is an array. {{.ExecuteQuery[0]}} indicates the detailed log of the first query statement, {{.ExecuteQuery[1]}} the second, and so on. |
{{.Condition}} | Trigger Condition | $1.errorLogCount > 1 | - |
{{.HappenThreshold}} | Number of times the trigger condition needs to be constantly met before an alarm is triggered | 1 | - |
{{.AlertThreshold}} | Alarm interval | 15 | Unit: Minute |
{{.Topic}} | Log topic name | nginxLog | - |
{{.TopicId}} | Log topic ID | a54de372-ffe0-49ae-xxxx-c340bb2b03f2 | - |
{{.StartTime}} | Time when the alarm is triggered for the first time | 2022-05-28 18:56:37 | Time zone: Asia/Shanghai |
{{.StartTimeUnix}} | Timestamp when the alarm is triggered for the first time | 1653735397099 | UNIX timestamp in milliseconds |
{{.NotifyTime}} | Time of this alarm notification | 2022-05-28 19:41:37 | Time zone: Asia/Shanghai |
{{.NotifyTimeUnix}} | Timestamp of this alarm notification | 1653738097099 | UNIX timestamp in milliseconds |
{{.NotifyType}} | Alarm notification type | 1 | Valid values: `1` (alarmed), `2` (resolved) |
{{.ConsecutiveAlertNums}} | Number of consecutive alarms | 2 | - |
{{.Duration}} | Alarm duration | 0 | Unit: Minute |
{{.TriggerParams}} | Alarm trigger parameter | $1.errorLogCount=5; | - |
{{.ConditionGroup}} | Group information when the alarm is triggered | {"$1.AppName":"userManageService"} | This is valid only when triggering by group is enabled in the alarm policy. |
{{.DetailUrl}} | URL of the alarm details page | https://alarm.cls.tencentcs.com/MDv2xxJh | No login is required. |
{{.QueryUrl}} | URL of the search and analysis statement in the first query statement | https://alarm.cls.tencentcs.com/T0pkxxMA | - |
{{.Message}} | Notification content | - | It indicates the **notification content** entered in the alarm policy. |
{{.QueryResult}} | Execution result of the query statement | - | |
{{.QueryLog}} | Detailed log matching the search condition of the query statement | - | |
{{.AnalysisResult}} | Multi-dimensional analysis result | - | This variable is valid only when an alarm is triggered and becomes invalid when the alarm is cleared. |
{{.QueryResult[0]}} indicates the execution result of the first query statement, {{.QueryResult[1]}} the second, and so on.
The first query statement: status:>=400 | select count(*) as errorLogCount
The second query statement: status:>=400 | select count(*) as errorLogCount,request_uri group by request_uri order by count(*) desc
[[{"errorLogCount": 7}],[{"errorLogCount": 3,"request_uri": "/apple-touch-icon-144x144.png"}, {"errorLogCount": 3,"request_uri": "/feed"}, {"errorLogCount": 1,"request_uri": "/opt/node_apps/test-v5/app/themes/basic/public/static/404.html"}]]
{{.QueryLog[0]}} indicates the detailed log of the first query statement, {{.QueryLog[1]}} the second, and so on. Up to last ten detailed logs can be contained in each query statement.[[{"content": {"__TAG__": {"pod": "nginxPod","cluster": "testCluster"},"body_bytes_sent": "32847","http_referer": "-","http_user_agent": "Opera/9.80 (Windows NT 6.1; U; en-US) Presto/2.7.62 Version/11.01","remote_addr": "105.86.148.186","remote_user": "-","request_method": "GET","request_uri": "/apple-touch-icon-144x144.png","status": "404","time_local": "01/Nov/2018:00:55:14"},"fileName": "/root/testLog/nginx.log","pkg_id": "285A243662909DE3-5CD","source": "172.17.0.2","time": 1653739000013,"topicId": "a54de372-ffe0-49ae-a12e-c340bb2b03f2"}, {"content": {"__TAG__": {"pod": "nginxPod","cluster": "testCluster"},"body_bytes_sent": "33496","http_referer": "-","http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36","remote_addr": "222.18.168.242","remote_user": "-","request_method": "GET","request_uri": "/opt/node_apps/test-v5/app/themes/basic/public/static/404.html","status": "404","time_local": "01/Nov/2018:00:54:37"},"fileName": "/root/testLog/nginx.log","pkg_id": "285A243662909DE3-5C8","source": "172.17.0.2","time": 1653738975008,"topicId": "a54de372-ffe0-49ae-a12e-c340bb2b03f2"}]]
The variable value is an object, with the key being the multi-dimensional analysis name and the value being the multi-dimensional analysis result. This variable is valid only when an alarm is triggered (that is, {{.NotifyType}}=1) and becomes invalid when the alarm is cleared (that is, {{.NotifyType}}=2).
For example, the following multi-dimensional analyses are configured:
Name: Top URL
Type: Top 5 field values by occurrence and their percentages
Field: request_uri
Name: Error log URL distribution
Type: Custom search and analysis
Analysis statement: status:>=400 | select count(*) as errorLogCount,request_uri group by request_uri order by count(*) desc
Name: Detailed error log
Type: Custom search and analysis
Analysis statement: status:>=400
{"Top URL": [{"count": 77,"ratio": 0.45294117647058824,"value": "/"}, {"count": 20,"ratio": 0.11764705882352941,"value": "/favicon.ico"}, {"count": 7,"ratio": 0.041176470588235294,"value": "/blog/feed"}, {"count": 5,"ratio": 0.029411764705882353,"value": "/test-tile-service"}, {"count": 3,"ratio": 0.01764705882352941,"value": "/android-chrome-192x192.png"}],"Detailed error log": [{"content": {"__TAG__": {"pod": "nginxPod","cluster": "testCluster"},"body_bytes_sent": "32847","http_referer": "-","http_user_agent": "Opera/9.80 (Windows NT 6.1; U; en-US) Presto/2.7.62 Version/11.01","remote_addr": "105.86.148.186","remote_user": "-","request_method": "GET","request_uri": "/apple-touch-icon-144x144.png","status": "404","time_local": "01/Nov/2018:00:55:14"},"fileName": "/root/testLog/nginx.log","pkg_id": "285A243662909DE3-5CD","source": "172.17.0.2","time": 1653739000013,"topicId": "a54de372-ffe0-49ae-a12e-c340bb2b03f2"}, {"content": {"__TAG__": {"pod": "nginxPod","cluster": "testCluster"},"body_bytes_sent": "33496","http_referer": "-","http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36","remote_addr": "222.18.168.242","remote_user": "-","request_method": "GET","request_uri": "/opt/node_apps/test-v5/app/themes/basic/public/static/404.html","status": "404","time_local": "01/Nov/2018:00:54:37"},"fileName": "/root/testLog/nginx.log","pkg_id": "285A243662909DE3-5C8","source": "172.17.0.2","time": 1653738975008,"topicId": "a54de372-ffe0-49ae-a12e-c340bb2b03f2"}],"Error log URL distribution": [{"errorLogCount": 3,"request_uri": "/apple-touch-icon-144x144.png"}, {"errorLogCount": 3,"request_uri": "/feed"}, {"errorLogCount": 1,"request_uri": "/opt/node_apps/test-v5/app/themes/basic/public/static/404.html"}]}
Variables are referenced inside {{ }}, and text outside {{ }} won't be processed.
{{.variable[x]}} or {{index .variable x}}
{{.variable.childNodeName}} or {{index .variable "childNodeName"}}
{{.variable[x]}} (equivalent to {{index .variable x}}) is used to extract array elements by subscript. Here, x is an integer greater than or equal to 0.
{{.variable.childNodeKey}} (equivalent to {{index .variable "childNodeName"}}) is used to extract sub-object values (value) by sub-object name (key).
When the sub-object name contains special characters, use {{index .variable "childNodeName"}}, such as {{index .AnalysisResult "Top URL"}}.
For example, the {{.QueryResult}} variable values are:
[[{"errorLogCount": 7 // Extract the value}],[{"errorLogCount": 3,"request_uri": "/apple-touch-icon-144x144.png"}, {"errorLogCount": 3,"request_uri": "/feed"}, {"errorLogCount": 1,"request_uri": "/opt/node_apps/test-v5/app/themes/basic/public/static/404.html"}]]
You can extract the errorLogCount value of the first array through the following expression:
{{.QueryResult[0][0].errorLogCount}}
7
{{range .variable}}Custom content{{.childNode1}}custom content{{.childNode2}}...{{end}}
{{range $key,$value := .variable}}Custom content{{$key}}custom content{{$value}}...{{end}}
{{.QueryResult}} variable values are:[[{"errorLogCount": 7}],[{"errorLogCount": 3,"request_uri": "/apple-touch-icon-144x144.png"}, {"errorLogCount": 3,"request_uri": "/feed"}, {"errorLogCount": 1,"request_uri": "/opt/node_apps/test-v5/app/themes/basic/public/static/404.html"}]]
You can list the errorLogCount value of each request_uri in the second array through the following expression:
{{range .QueryResult[1]}}* {{.request_uri}} error log quantity: {{.errorLogCount}}{{end}}
* /apple-touch-icon-144x144.png error log quantity: 3
* /feed error log quantity: 3
* /opt/node_apps/test-v5/app/themes/basic/public/static/404.html error log quantity: 1
{{if boolen}}xxx{{end}}
{{if boolen}}xxx{{else}}xxx{{end}}
{{if boolen}}xxx{{else if boolen}}xxx{{end}}
eq arg1 arg2: When arg1 == arg2, the value is `true`.
ne arg1 arg2: When arg1 != arg2, the value is `true`.
lt arg1 arg2: When arg1 < arg2, the value is `true`.
le arg1 arg2: When arg1 <= arg2, the value is `true`.
gt arg1 arg2: When arg1 > arg2, the value is `true`.
ge arg1 arg2: When arg1 >= arg2, the value is `true`.
{{.QueryResult}} variable values are:[[{"errorLogCount": 7}],[{"errorLogCount": 3,"request_uri": "/apple-touch-icon-144x144.png"}, {"errorLogCount": 3,"request_uri": "/feed"}, {"errorLogCount": 1,"request_uri": "/opt/node_apps/test-v5/app/themes/basic/public/static/404.html"}]]
You can traverse the second array and output each request_uri whose errorLogCount is ≥ 2 and ≤ 100, together with its errorLogCount value, through the following expression:
{{range .QueryResult[1]}}{{if and (ge .errorLogCount 2) (le .errorLogCount 100)}}* {{.request_uri}} error log quantity: {{.errorLogCount}}{{end}}{{end}}
* /apple-touch-icon-144x144.png error log quantity: 3
* /feed error log quantity: 3
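For reference, the same loop and comparison can be written with vanilla Go text/template, again with index in place of the [x] shorthand. This is an illustrative sketch only; because JSON numbers decode to float64 in Go, the comparison literals are written as 2.0 and 100.0.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	const queryResult = `[[{"errorLogCount": 7}],
	[{"errorLogCount": 3, "request_uri": "/apple-touch-icon-144x144.png"},
	 {"errorLogCount": 3, "request_uri": "/feed"},
	 {"errorLogCount": 1, "request_uri": "/opt/node_apps/test-v5/app/themes/basic/public/static/404.html"}]]`

	var data struct{ QueryResult []interface{} }
	if err := json.Unmarshal([]byte(queryResult), &data.QueryResult); err != nil {
		panic(err)
	}

	// Keep only rows whose errorLogCount is between 2 and 100.
	tpl := template.Must(template.New("demo").Parse(
		`{{range index .QueryResult 1}}` +
			`{{if and (ge .errorLogCount 2.0) (le .errorLogCount 100.0)}}` +
			"* {{.request_uri}} error log quantity: {{.errorLogCount}}\n" +
			`{{end}}{{end}}`))
	if err := tpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
	// Output:
	// * /apple-touch-icon-144x144.png error log quantity: 3
	// * /feed error log quantity: 3
}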
You can also use if to check whether a field value exists. If the field value is an empty string or does not exist, it is treated as false. For example:
{{if .QueryLog[0][0].apple}}apple exist, value is : {{.QueryLog[0][0].apple}}{{else}}apple is not exist{{end}}
{{- xxx}} or {{xxx -}}
Add - at the beginning or end inside {{ }} to remove the blank areas (spaces and line breaks) that the template would otherwise output. For example:
{{- range .QueryResult[1]}}{{- if and (ge .errorLogCount 2) (le .errorLogCount 100)}}* {{.request_uri}} error log quantity: {{.errorLogCount}}{{- end}}{{- end}}
* /apple-touch-icon-144x144.png error log quantity: 3
* /feed error log quantity: 3
{{escape .variable}}
For example, the {{.ExecuteQuery[0]}} variable value is: status:>=400 | select count(*) as "error log quantity".
If escaping is not used, the request content in the custom webhook configuration will be:
{"Query":"{{.ExecuteQuery[0]}}"}
{"Query":"status:>=400 | select count(*) as "error log quantity""}
{"Query":"{{escape .ExecuteQuery[0]}}"}
{"Query":"status:>=400 | select count(*) as \"error log quantity\""}
{{substr .variable start}} or {{substr .variable start length}}
The {{.QueryLog[0][0].fileName}} variable value is: /root/testLog/nginx.log
{{substr .QueryLog[0][0].fileName 6 7 }}
testLog
{{extract .variable "startstring" ["endstring"]}}
The {{.QueryLog[0][0].fileName}} variable value is: /root/testLog/nginx.log
You can extract the content between /root/ and /nginx through the following expression:
{{extract .QueryLog[0][0].fileName "/root/" "/nginx"}}
testLog
{{containstr .variable "searchstring"}}
The {{.QueryLog[0][0].fileName}} variable value is: /root/testLog/nginx.log
You can check whether the file name contains test through the following expression:
{{if containstr .QueryLog[0][0].fileName "test"}}Test log{{else}}Non-test log{{end}}
Test log
{{fromUnixTime .variable}} or {{fromUnixTime .variable "timezone"}}
The {{.QueryLog[0][0].time}} variable value is: 1653893435008
{{fromUnixTime .QueryLog[0][0].time}}
{{fromUnixTime .QueryLog[0][0].time "Asia/Shanghai"}}
{{fromUnixTime .QueryLog[0][0].time "Asia/Tokyo"}}
The returned results are, in order:
2022-05-30 14:50:35.008 +0800 CST
2022-05-30 14:50:35.008 +0800 CST
2022-05-30 15:50:35.008 +0900 JST
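If you need to double-check such a conversion offline, the same result can be reproduced with the Go standard library. This is only an illustrative sketch (fromUnixTime itself is a CLS template function); note that the .time value is a Unix timestamp in milliseconds.
package main

import (
	"fmt"
	"time"
)

func main() {
	ts := time.UnixMilli(1653893435008) // the .QueryLog[0][0].time value above

	shanghai, err := time.LoadLocation("Asia/Shanghai")
	if err != nil {
		panic(err)
	}
	tokyo, err := time.LoadLocation("Asia/Tokyo")
	if err != nil {
		panic(err)
	}

	const layout = "2006-01-02 15:04:05.000 -0700 MST"
	fmt.Println(ts.In(shanghai).Format(layout)) // 2022-05-30 14:50:35.008 +0800 CST
	fmt.Println(ts.In(tokyo).Format(layout))    // 2022-05-30 15:50:35.008 +0900 JST
}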
{{concat .variable1 .variable2 ...}}
{{concat .Region .Alarm}}
Guangzhou alarmTest
{{base64_encode .variable}}
{{base64_decode .variable}}
{{base64url_encode .variable}}
{{base64url_decode .variable}}
{{url_encode .variable}}
{{url_decode .variable}}
{{base64_encode "test测试"}}
{{base64_decode "dGVzdOa1i+ivlQ=="}}
{{base64url_encode "test测试"}}
{{base64url_decode "dGVzdOa1i-ivlQ=="}}
{{url_encode "https://console.intl.cloud.tencent.com:80/cls?region=ap-chongqing"}}
{{url_decode "https%3A%2F%2Fconsole.cloud.tencent.com%3A80%2Fcls%3Fregion%3Dap-chongqing"}}
The returned results are, in order:
dGVzdOa1i+ivlQ==
test测试
dGVzdOa1i-ivlQ==
test测试
https%3A%2F%2Fconsole.intl.cloud.tencent.com%3A80%2Fcls%3Fregion%3Dap-chongqing
https://console.cloud.tencent.com:80/cls?region=ap-chongqing
{{md5 .variable}}
{{md5 .variable | base64_encode}}
{{md5 .variable | base64url_encode}}
{{sha1 .variable}}
{{sha1 .variable | base64_encode}}
{{sha1 .variable | base64url_encode}}
{{sha256 .variable}}
{{sha256 .variable | base64_encode}}
{{sha256 .variable | base64url_encode}}
{{sha512 .variable}}
{{sha512 .variable | base64_encode}}
{{sha512 .variable | base64url_encode}}
{{md5 "test"}}
{{md5 "test" | base64_encode}}
{{md5 "test" | base64url_encode}}
{{sha1 "test"}}
{{sha1 "test" | base64_encode}}
{{sha1 "test" | base64url_encode}}
{{sha256 "test"}}
{{sha256 "test" | base64_encode}}
{{sha256 "test" | base64url_encode}}
{{sha512 "test"}}
{{sha512 "test" | base64_encode}}
{{sha512 "test" | base64url_encode}}
The returned results are, in order:
098F6BCD4621D373CADE4E832627B4F6
CY9rzUYh03PK3k6DJie09g==
CY9rzUYh03PK3k6DJie09g==
A94A8FE5CCB19BA61C4C0873D391E987982FBBD3
qUqP5cyxm6YcTAhz05Hph5gvu9M=
qUqP5cyxm6YcTAhz05Hph5gvu9M=
9F86D081884C7D659A2FEAA0C55AD015A3BF4F1B2B0B822CD15D6C15B0F00A08
n4bQgYhMfWWaL+qgxVrQFaO/TxsrC4Is0V1sFbDwCgg=
n4bQgYhMfWWaL-qgxVrQFaO_TxsrC4Is0V1sFbDwCgg=
EE26B0DD4AF7E749AA1A8EE3C10AE9923F618980772E473F8819A5D4940E0DB27AC185F8A0E1D5F84F88BC887FD67B143732C304CC5FA9AD8E6F57F50028A8FF
7iaw3Ur350mqGo7jwQrpkj9hiYB3Lkc/iBml1JQODbJ6wYX4oOHV+E+IvIh/1nsUNzLDBMxfqa2Ob1f1ACio/w==
7iaw3Ur350mqGo7jwQrpkj9hiYB3Lkc_iBml1JQODbJ6wYX4oOHV-E-IvIh_1nsUNzLDBMxfqa2Ob1f1ACio_w==
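The digest values above can be verified with the Go standard library. The sketch below reproduces the md5 and sha256 rows for "test" (hex digest, Base64, and URL-safe Base64 of the raw digest bytes); the other algorithms follow the same pattern.
package main

import (
	"crypto/md5"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func main() {
	md5Sum := md5.Sum([]byte("test"))
	sha256Sum := sha256.Sum256([]byte("test"))

	fmt.Printf("%X\n", md5Sum[:])                             // 098F6BCD4621D373CADE4E832627B4F6
	fmt.Println(base64.StdEncoding.EncodeToString(md5Sum[:])) // CY9rzUYh03PK3k6DJie09g==

	fmt.Printf("%X\n", sha256Sum[:])                             // 9F86D081884C7D659A2FEAA0C55AD015A3BF4F1B2B0B822CD15D6C15B0F00A08
	fmt.Println(base64.URLEncoding.EncodeToString(sha256Sum[:])) // n4bQgYhMfWWaL-qgxVrQFaO_TxsrC4Is0V1sFbDwCgg=
}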
{{hmac_md5 .variable "Secretkey"}}
{{hmac_md5 .variable "Secretkey" | base64_encode}}
{{hmac_md5 .variable "Secretkey" | base64url_encode}}
{{hmac_sha1 .variable "Secretkey"}}
{{hmac_sha1 .variable "Secretkey" | base64_encode}}
{{hmac_sha1 .variable "Secretkey" | base64url_encode}}
{{hmac_sha256 .variable "Secretkey"}}
{{hmac_sha256 .variable "Secretkey" | base64_encode}}
{{hmac_sha256 .variable "Secretkey" | base64url_encode}}
{{hmac_sha512 .variable "Secretkey"}}
{{hmac_sha512 .variable "Secretkey" | base64_encode}}
{{hmac_sha512 .variable "Secretkey" | base64url_encode}}
Secretkey is the key in the HMAC encryption algorithm and can be modified as needed.
{{hmac_md5 "test" "Secretkey"}}
{{hmac_md5 "test" "Secretkey" | base64_encode}}
{{hmac_md5 "test" "Secretkey" | base64url_encode}}
{{hmac_sha1 "test" "Secretkey"}}
{{hmac_sha1 "test" "Secretkey" | base64_encode}}
{{hmac_sha1 "test" "Secretkey" | base64url_encode}}
{{hmac_sha256 "test" "Secretkey"}}
{{hmac_sha256 "test" "Secretkey" | base64_encode}}
{{hmac_sha256 "test" "Secretkey" | base64url_encode}}
{{hmac_sha512 "test" "Secretkey"}}
{{hmac_sha512 "test" "Secretkey" | base64_encode}}
{{hmac_sha512 "test" "Secretkey" | base64url_encode}}
The returned results are, in order:
E7B946D930658699AA668601E33E87CE
57lG2TBlhpmqZoYB4z6Hzg==
57lG2TBlhpmqZoYB4z6Hzg==
2AB64F124D932F5033EAC7AF392AC5CC4D52F503
KrZPEk2TL1Az6sevOSrFzE1S9QM=
KrZPEk2TL1Az6sevOSrFzE1S9QM=
FC49EBC05209B1359773D87C216BA85BCE0163FDE459EA37AB603EC9D8445D23
/EnrwFIJsTWXc9h8IWuoW84BY/3kWeo3q2A+ydhEXSM=
_EnrwFIJsTWXc9h8IWuoW84BY_3kWeo3q2A-ydhEXSM=
D18DF3D943F74769A8B66E43D7EF03639BB6B8B8A2EBC9976170DC58EEE58BE98478F3183E4B5AA3481DE12026AAE3843F8213B39D639EAC6EE93734EA667BC5
0Y3z2UP3R2motm5D1+8DY5u2uLii68mXYXDcWO7li+mEePMYPktao0gd4SAmquOEP4ITs51jnqxu6Tc06mZ7xQ==
0Y3z2UP3R2motm5D1-8DY5u2uLii68mXYXDcWO7li-mEePMYPktao0gd4SAmquOEP4ITs51jnqxu6Tc06mZ7xQ==
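Similarly, the HMAC values can be reproduced with Go's crypto/hmac, keying the hash with Secretkey and feeding it the variable value. A minimal sketch for hmac_sha256, which should yield the hmac_sha256 row shown above:
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func main() {
	mac := hmac.New(sha256.New, []byte("Secretkey")) // Secretkey is the HMAC key
	mac.Write([]byte("test"))                        // "test" stands in for the template variable value
	sum := mac.Sum(nil)

	fmt.Printf("%X\n", sum)                             // hex digest
	fmt.Println(base64.StdEncoding.EncodeToString(sum)) // Base64 digest
	fmt.Println(base64.URLEncoding.EncodeToString(sum)) // URL-safe Base64 digest
}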
For example, you can output all log fields in key:value format, one field per row, excluding CLS preset fields and metadata fields, through the following expression:
{{range $key,$value := .QueryLog[0][0].content}}{{if not (containstr $key "__TAG__")}}{{- $key}}:{{$value}}{{- end}}{{- end}}
.QueryLog[0][0] indicates the last detailed log that meets the search condition of the first query statement in the alarm policy. Its value is:{"content": {"__TAG__": {"a": "b12fgfe","c": "fgerhcdhgj"},"body_bytes_sent": "33704","http_referer": "-","http_user_agent": "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.3319.102 Safari/537.36","remote_addr": "247.0.249.191","remote_user": "-","request_method": "GET","request_uri": "/products/hadoop)","status": "404","time_local": "01/Nov/2018:07:54:08"},"fileName": "/root/testLog/nginx.log","pkg_id": "285A243662909DE3-210B","source": "172.17.0.2","time": 1653908859008,"topicId": "a54de372-ffe0-49ae-a12e-c340bb2b03f2"}
remote_addr:247.0.249.191
time_local:01/Nov/2018:07:54:08
http_user_agent:Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.3319.102 Safari/537.36
remote_user:-
http_referer:-
body_bytes_sent:33704
request_method:GET
request_uri:/products/hadoop)
status:404
For example, the analysis statement of the alarm policy is:
status:>=400 | select count(*) as errorLogCount,request_uri group by request_uri order by count(*) desc
The trigger condition is $1.errorLogCount > 10. You can output each request_uri that meets the trigger condition, together with its errorLogCount value, through the following expression:
{{range .QueryResult[0]}}{{- if gt .errorLogCount 10}}{{.request_uri}} error log quantity: {{.errorLogCount}}{{- end}}{{- end}}
.QueryResult[0] indicates the execution result of the first query statement in the alarm policy. Its value is:[{"errorLogCount": 161,"request_uri": "/apple-touch-icon-144x144.png"}, {"errorLogCount": 86,"request_uri": "/opt/node_apps/test-v5/app/themes/basic/public/static/404.html"}, {"errorLogCount": 33,"request_uri": "/feed"}, {"errorLogCount": 26,"request_uri": "/wp-login.php"}, {"errorLogCount": 10,"request_uri": "/safari-pinned-tab.svg"}, {"errorLogCount": 7,"request_uri": "/mstile-144x144.png"}, {"errorLogCount": 4,"request_uri": "/atom.xml"}, {"errorLogCount": 3,"request_uri": "/content/plugins/prettify-gc-syntax-highlighter/launch.js?ver=3.5.2?ver=3.5.2"}]
/apple-touch-icon-144x144.png error log quantity: 161
/opt/node_apps/test-v5/app/themes/basic/public/static/404.html error log quantity: 86
/feed error log quantity: 33
/wp-login.php error log quantity: 26
Last updated:2024-01-20 17:59:35
Metric | Description |
Total Alarm Policy Executions | Number of alarm policies executed over the statistical time range |
Alarm Policy Executions | Number of times the query and analysis statement in the alarm policy is executed over the statistical time range |
Failed Alarm Policy Executions | Number of alarm policy execution failures over the statistical time range. Execution failures include AlarmConfigNotFound, QuerySyntaxError, QueryError, QueryResultParseError, ConditionSyntaxError, ConditionEvaluateError, and ConditionValueTypeError. For more information, please see Execution Result Status Codes |
Times of Trigger Conditions Met | Number of times the query and analysis statement in the alarm policy is executed successfully and the result returned meets the trigger condition over the statistical time range |
Notifications Triggered by the Alarm Policy | Number of times notifications are triggered by the execution of the alarm policy over the statistical time range |
Top 10 Alarm Policies by Number of Notifications | Top 10 alarm policies in terms of the number of times notifications are triggered over the statistical time range |
Execution Result | Description |
AlarmConfigNotFound | The alarm policy configuration is missing. Please check whether the alarm policy and monitoring object have been configured correctly. |
QuerySyntaxError | The analysis statement of the monitoring object has a syntax error. Please check whether the statement is correct. For more information on the syntax, please see Overview. |
QueryError | The analysis statement is not executed properly. Please check the analysis statement and the index configuration of the log topic. |
QueryResultParseError | Failed to parse the analysis result format. |
ConditionSyntaxError | The trigger condition expression has a syntax error. Please check the syntax format of the expression. |
ConditionEvaluateError | An error occurred while computing the trigger condition. Please check whether the imported variable exists in the analysis result |
ConditionValueTypeError | The evaluation result of the trigger condition is not a Boolean value. Please check whether the trigger condition expression is correct. |
EvalTimesLimited | The trigger condition hasn't been met even after it has been computed more than 1,000 times. |
QueryResultUnmatch | The analysis result for the current monitoring period doesn't meet the alarm trigger condition. |
UnreachedThreshold | The alarm trigger condition is met, but the alarm convergence threshold has not been reached, so no alarm notification is sent. HappenThreshold Unreached: the period convergence condition is not met; for example, an alarm is triggered only if the trigger condition is met in 5 consecutive monitoring periods. AlertThreshold Unreached: the alarm interval condition is not met; for example, an alarm will be triggered once every 15 minutes. |
TemplateUnmatched | The alarm configuration information doesn't match the notification template. Specific causes include: TypeUnmatched: the alarm notification type (alarm triggered or alarm cleared) doesn't match the notification template, so no alarm notification is sent. TimeUnmatched: the alarm notification time period doesn't match the notification template, so no alarm notification is sent. SendFail: the notification failed to be sent. |
Matched | The alarm condition is met, and the alarm notification is sent successfully. |
Alarm State | Description |
Uncleared | The system continuously meets the trigger condition and triggers an alarm. |
Cleared | The current monitoring period does not meet the trigger condition. |
Invalid | The alarm policy is deleted or modified. |
Last updated:2024-06-06 17:05:09
Field | Comparison Method | Comparison Value | Example of API Parameter (Rule) |
Alarm Severity Level | Belongs to: In. Does not belong to: NotIn. | Reminder (1), Warning (0), Emergency (2); multiple values supported | Example meaning: The alarm severity is not Reminder. |
Alarm Policy AlarmID | Belongs to: In. Does not belong to: NotIn. | Alarm policy (multiple policies supported) | Example meaning: The alarm policy belongs to alarm-4ddebe88-xxxx-4d8c-acce-6a8613e24cbf, alarm-12a93b68-xxxx-4a42-bcf3-843690fc0793. |
Alarm Policy Name AlarmName | Regular expression match: =~. Regular expression mismatch: !=~. | Alarm policy name (regular expression) | Example meaning: The alarm policy name does not match the regular expression test. |
Alarm Classification Field Label (the specific classification field name must be specified) | Belongs to: In. Does not belong to: NotIn. | Classified field value (multiple values supported) | Example meaning: Alarm classification field key1 belongs to value1, value2. |
 | Regular expression match: =~. Regular expression mismatch: !=~. | Classified field value (regular expression) | Example meaning: Alarm classification field key2 matches the regular expression value3. |
Group Trigger Field Group (the specific group field name must be specified) | Belongs to: In. Does not belong to: NotIn. | Classified field value (multiple values supported) | Example meaning: Group trigger field $1.key1 belongs to value1, value2. |
 | Regular expression match: =~. Regular expression mismatch: !=~. | Classified field value (regular expression) | Example meaning: Group trigger field $1.key2 does not match the regular expression value3. |
Monitor Object MonitorObject | Belongs to: In. Does not belong to: NotIn. | Log topic and metric topic (multiple topics supported) | Example meaning: The monitored object belongs to log topic aa6b76f1-9040-47bc-xxxx-cdd67000c0ce and metric topic 554588f7-3481-4c15-xxxx-80363b3a1c8e. Note: BizType=0 represents a log topic; BizType=1 represents a metric topic. |
{
    "Value": "AND", // Meet the following rules at the same time; it must be AND
    "Type": "Operation",
    "Children": [{
        "Type": "Condition", // The first rule
        "Value": "Level", // Alarm severity
        "Children": [{
            "Value": "In", // Belongs to
            "Type": "Compare"
        }, {
            "Value": "[1]", // Reminder
            "Type": "Value"
        }]
    }, {
        "Type": "Condition", // The second rule
        "Value": "AlarmID", // Alarm policy
        "Children": [{
            "Value": "In", // Belongs to
            "Type": "Compare"
        }, {
            "Value": "[\"alarm-57105ec6-xxxx-xxxx-xxxx-892f3b8d143a\"]", // ID corresponding to the demo alarm policy
            "Type": "Value"
        }]
    }]
}
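If you assemble this Rule parameter programmatically, the nested Type/Value/Children structure can be built and marshaled as in the sketch below. The RuleNode struct is illustrative only; it mirrors the fields in the example above and is not the official SDK type.
package main

import (
	"encoding/json"
	"fmt"
)

type RuleNode struct {
	Type     string     `json:"Type"`
	Value    string     `json:"Value"`
	Children []RuleNode `json:"Children,omitempty"`
}

func main() {
	rule := RuleNode{
		Type:  "Operation",
		Value: "AND", // all child conditions must match
		Children: []RuleNode{
			{Type: "Condition", Value: "Level", Children: []RuleNode{
				{Type: "Compare", Value: "In"},
				{Type: "Value", Value: "[1]"}, // Reminder
			}},
			{Type: "Condition", Value: "AlarmID", Children: []RuleNode{
				{Type: "Compare", Value: "In"},
				{Type: "Value", Value: `["alarm-57105ec6-xxxx-xxxx-xxxx-892f3b8d143a"]`},
			}},
		},
	}

	out, _ := json.MarshalIndent(rule, "", "  ")
	fmt.Println(string(out))
}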
Last updated:2025-04-07 14:50:44
Last updated:2025-04-07 14:50:45
For example, to collect logs matching /opt/logs/*.log, you can specify the log directory as /opt/logs and the file name as *.log.

Field Name | Description |
container_id | Container ID to which logs belong. |
container_name | Container name to which logs belong. |
image_name | Image name/IP of the container to which logs belong. |
namespace | The namespace of the pod to which logs belong. |
pod_uid | UID of the pod to which logs belong. |
pod_name | Name of the pod to which logs belong. |
pod_ip | IP address of the pod to which logs belong. |
pod_label_{label name} | Label of the pod to which logs belong. For example, if a pod has two labels, app=nginx and env=prod, the uploaded log will carry two metadata entries: pod_label_app:nginx and pod_label_env:prod. |

Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
__CONTENT__:Tue Jan 22 12:08:15 CST 2019 Installed: libjpeg-turbo-static-1.2.90-6.el7.x86_64
2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:
java.lang.NullPointerException
 at com.test.logging.FooFactory.createFoo(FooFactory.java:15)
 at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s.+
__CONTENT__:2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:\njava.lang.NullPointerException\n at com.test.logging.FooFactory.createFoo(FooFactory.java:15)\n at com.test.logging.FooFactoryTest.test(FooFactoryTest.java:11)
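The role of the first-line regular expression can be checked quickly with Go's regexp package: it matches the line that begins a log entry but not the stack-trace continuation lines, which is how the collector knows where a new multi-line log starts. An illustrative sketch, assuming only the standard library:
package main

import (
	"fmt"
	"regexp"
)

func main() {
	firstLine := regexp.MustCompile(`\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}\s.+`)

	lines := []string{
		"2019-12-15 17:13:06,043 [main] ERROR com.test.logging.FooFactory:",
		"java.lang.NullPointerException",
		" at com.test.logging.FooFactory.createFoo(FooFactory.java:15)",
	}
	for _, l := range lines {
		fmt.Println(firstLine.MatchString(l), l)
	}
	// Output:
	// true  (the first line starts a new log entry)
	// false (continuation line)
	// false (continuation line)
}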
10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354
(\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*
body_bytes_sent: 9703
http_host: 127.0.0.1
http_protocol: HTTP/1.1
http_referer: http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum
http_user_agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
remote_addr: 10.135.46.111
request_length: 782
request_method: GET
request_time: 0.354
request_url: /my/course/1
status: 200
time_local: [22/Jan/2019:19:19:30 +0800]
upstream_response_time: 0.354
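To sanity-check the extraction regular expression before configuring it, you can run it against the sample log with Go's regexp package. The key-to-capture-group mapping below is inferred from the sample extraction result above and is illustrative only:
package main

import (
	"fmt"
	"regexp"
)

func main() {
	logLine := `10.135.46.111 - - [22/Jan/2019:19:19:30 +0800] "GET /my/course/1 HTTP/1.1" 127.0.0.1 200 782 9703 "http://127.0.0.1/course/explore?filter%5Btype%5D=all&filter%5Bprice%5D=all&filter%5BcurrentLevelId%5D=all&orderBy=studentNum" "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0" 0.354 0.354`

	re := regexp.MustCompile(`(\S+)[^\[]+(\[[^:]+:\d+:\d+:\d+\s\S+)\s"(\w+)\s(\S+)\s([^"]+)"\s(\S+)\s(\d+)\s(\d+)\s(\d+)\s"([^"]+)"\s"([^"]+)"\s+(\S+)\s(\S+).*`)

	// Key names taken from the sample result; order follows the capture groups.
	keys := []string{
		"remote_addr", "time_local", "request_method", "request_url",
		"http_protocol", "http_host", "status", "request_length",
		"body_bytes_sent", "http_referer", "http_user_agent",
		"request_time", "upstream_response_time",
	}

	m := re.FindStringSubmatch(logLine)
	if m == nil {
		panic("regular expression did not match the sample log")
	}
	for i, key := range keys {
		fmt.Printf("%s: %s\n", key, m[i+1])
	}
}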
[2018-10-01T10:30:01,000] [INFO] java.lang.Exception: exception happened
 at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
 at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
 at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
\[\d+-\d+-\w+:\d+:\d+,\d+]\s\[\w+]\s.*
\[(\d+-\d+-\w+:\d+:\d+,\d+)\]\s\[(\w+)\]\s(.*)
Using the () capture groups, you can customize the key name of each group, and the log is extracted as follows:
time: 2018-10-01T10:30:01,000
level: INFO
msg: java.lang.Exception: exception happened
 at TestPrintStackTrace.f(TestPrintStackTrace.java:3)
 at TestPrintStackTrace.g(TestPrintStackTrace.java:7)
 at TestPrintStackTrace.main(TestPrintStackTrace.java:16)
{"remote_ip":"10.135.46.111","time_local":"22/Jan/2019:19:19:34 +0800","body_sent":23,"responsetime":0.232,"upstreamtime":"0.232","upstreamhost":"unix:/tmp/php-cgi.sock","http_host":"127.0.0.1","method":"POST","url":"/event/dispatch","request":"POST /event/dispatch HTTP/1.1","xff":"-","referer":"http://127.0.0.1/my/course/4","agent":"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0","response_code":"200"}
agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:64.0) Gecko/20100101 Firefox/64.0
body_sent: 23
http_host: 127.0.0.1
method: POST
referer: http://127.0.0.1/my/course/4
remote_ip: 10.135.46.111
request: POST /event/dispatch HTTP/1.1
response_code: 200
responsetime: 0.232
time_local: 22/Jan/2019:19:19:34 +0800
upstreamhost: unix:/tmp/php-cgi.sock
upstreamtime: 0.232
url: /event/dispatch
xff: -
10.20.20.10 - ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: 35 ::: http://127.0.0.1/
With the separator :::, this log will be divided into eight fields, and each of these fields is assigned a unique key, as shown below:
IP: 10.20.20.10 -
bytes: 35
host: 127.0.0.1
length: 647
referer: http://127.0.0.1/
request: GET /online/sample HTTP/1.1
status: 200
time: [Tue Jan 22 14:49:45 CST 2019 +0800]
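A separator-based split is straightforward to reproduce outside CLS as well; the illustrative sketch below splits the sample log on ::: and pairs each field with the key names used above (the pairing order is an assumption based on the sample result):
package main

import (
	"fmt"
	"strings"
)

func main() {
	logLine := "10.20.20.10 - ::: [Tue Jan 22 14:49:45 CST 2019 +0800] ::: GET /online/sample HTTP/1.1 ::: 127.0.0.1 ::: 200 ::: 647 ::: 35 ::: http://127.0.0.1/"
	keys := []string{"IP", "time", "request", "host", "status", "length", "bytes", "referer"}

	for i, field := range strings.Split(logLine, ":::") {
		fmt.Printf("%s: %s\n", keys[i], strings.TrimSpace(field))
	}
}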
1571394459, http://127.0.0.1/my/course/4|10.135.46.111|200, status:DEAD,
{
    "processors": [{
        "type": "processor_split_delimiter",
        "detail": {
            "Delimiter": ",",
            "ExtractKeys": ["time", "msg1", "msg2"]
        },
        "processors": [{
            "type": "processor_timeformat",
            "detail": {
                "KeepSource": true,
                "TimeFormat": "%s",
                "SourceKey": "time"
            }
        }, {
            "type": "processor_split_delimiter",
            "detail": {
                "KeepSource": false,
                "Delimiter": "|",
                "SourceKey": "msg1",
                "ExtractKeys": ["submsg1", "submsg2", "submsg3"]
            },
            "processors": []
        }, {
            "type": "processor_split_key_value",
            "detail": {
                "KeepSource": false,
                "Delimiter": ":",
                "SourceKey": "msg2"
            }
        }]
    }]
}
time: 1571394459
submsg1: http://127.0.0.1/my/course/4
submsg2: 10.135.46.111
submsg3: 200
status: DEAD
For example, if you want to collect only logs where ErrorCode = 404, you can enable the filter feature and configure rules as needed.
Last updated:2024-01-20 17:55:44
Go to the LogListener installation directory loglistener and start LogListener by running the following script:
cd loglistener/tools; ./start.sh
Go to the LogListener installation directory loglistener and stop LogListener by running the following script:
cd loglistener/tools; ./stop.sh
Go to the LogListener installation directory loglistener and check the status of the LogListener processes by running the following command:
cd loglistener/tools; ./p.sh
The LogListener processes are as follows:
bin/loglistenerm -d                                # Daemon process
bin/loglistener --conf=etc/loglistener.conf        # Main process
bin/loglisteneru -u --conf=etc/loglistener.conf    # Update process
Go to the LogListener installation directory loglistener and uninstall LogListener by running the following command:
cd loglistener/tools; ./uninstall.sh
Go to the LogListener installation directory loglistener and check the heartbeat and configuration of LogListener by running the following command:
cd loglistener/tools; ./check.sh
Last updated:2024-01-20 17:55:44
cd loglistener/tools && ./check.sh

telnet <region>.cls.myqcloud.com 80
<region> is the abbreviation for the region where CLS resides. For more information on regions, see Available Regions.
If the network connection is normal, output similar to the following appears; otherwise, the connection has failed, and you should check the network to ensure the CLS endpoint is reachable.

cd loglistener/tools && ./p.sh
bin/loglistenerm -d                                # Daemon process
bin/loglistener --conf=etc/loglistener.conf        # Main process
bin/loglisteneru -u --conf=etc/loglistener.conf    # Update process
cd loglistener/tools && ./start.sh
cd loglistener/etc && cat loglistener.conf
