
Row Processing Functions
Last updated: 2025-12-05 11:26:34

Overview

Row processing functions operate on whole log rows, for example filtering, distributing, and splitting them.



Function log_output

Function definition

This function outputs a log row to a specified log topic. It can be used independently or together with branch conditions.

Syntax description

log_output(alias). The alias is defined when the processing task is configured.

Field description

Parameter | Description | Type | Required | Default Value | Value Range
alias | Log topic alias | string | Yes | - | -

Sample

Distribute the log to 3 different log topics according to the values (warning, info, and error) of the loglevel field.
Raw log:
[
{
"loglevel": "warning"
},
{
"loglevel": "info"
},
{
"loglevel": "error"
}
]
Processing rule:
// The `loglevel` field has 3 values (`warning`, `info`, and `error`), so the log is distributed to 3 different log topics accordingly.
t_switch(regex_match(v("loglevel"),regex="info"),log_output("info_log"),regex_match(v("loglevel"),regex="warning"),log_output("warning_log"),regex_match(v("loglevel"),regex="error"),log_output("error_log"))
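The routing above can be modeled outside CLS. Below is a minimal Python sketch (our own simulation, not the CLS implementation) of how t_switch with regex_match and log_output sends each row to the first branch whose condition matches; the aliases info_log, warning_log, and error_log are the ones assumed by the rule above.

```python
import re

def distribute(logs):
    """Route each log row to the first topic alias whose pattern matches."""
    routes = [
        (r"info", "info_log"),
        (r"warning", "warning_log"),
        (r"error", "error_log"),
    ]
    result = {}
    for log in logs:
        for pattern, alias in routes:
            # re.search approximates regex_match here (an assumption of ours)
            if re.search(pattern, log["loglevel"]):
                result.setdefault(alias, []).append(log)
                break  # like t_switch, stop at the first matching branch
    return result

logs = [{"loglevel": "warning"}, {"loglevel": "info"}, {"loglevel": "error"}]
print(distribute(logs))
```

Each row lands in exactly one topic because evaluation stops at the first matching branch, mirroring t_switch semantics.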

Function log_auto_output

Function definition

This function outputs logs to dynamically created target log topics. For example, suppose you need to create target log topics based on the value of the log field "pd" and distribute the matching logs into them. If pd takes the values "CLB", "Ckafka", "COS", and "CDN", this function dynamically creates target log topics named "CLB", "Ckafka", "COS", and "CDN" and writes the related logs into the corresponding topics. You can also configure the index type and storage period for these newly created topics.

Syntax description

log_auto_output(topic_name="", logset_name="", index_options="", period=3, storage_type="", hot_period=0)

Field description

Parameter | Description | Type | Required | Default Value | Notes
topic_name | Log topic name | string | Yes | - | If topic_name contains "|", "|" is removed from the generated topic name. If topic_name exceeds 250 characters, only the first 250 characters are kept and the rest are truncated.
logset_name | Logset name | string | Yes | - | -
index_options | all_index: enable key-value and full-text indexing; no_index: disable indexing; content_index: enable full-text indexing; key_index: enable key-value indexing | string | No | all_index | If storage_type=cold (infrequent storage), all_index and key_index do not take effect, because infrequent storage does not support key-value indexing.
period | Storage period in days; 3640 means permanent storage | number | No | 3 | 1 to 3600 days
storage_type | Storage type of the log topic. hot: standard storage; cold: infrequent storage | string | No | hot | When storage_type is cold, the minimum period is 7 days.
hot_period | 0: disable log settlement; non-0: number of days of standard storage after log settlement is enabled. hot_period must be greater than or equal to 7 and less than period, and takes effect only when storage_type is hot. | number | No | 0 | -
tag_dynamic | Add dynamic tags to the log topic; use with the extract_tag() function to extract tag key-value pairs from log fields. Example: tag_dynamic=extract_tag(v("pd"),v("env"),v("team"),v("person")) | string | No | - | No more than 10 tag pairs in total together with tag_static
tag_static | Add static tags to the log topic. Example: tag_static="Ckafka:test_env,developer_team:MikeWang" | string | No | - | No more than 10 tag pairs in total together with tag_dynamic

Sample

Raw log:
[
{
"pd": "CLB",
"dateTime": "2023-05-25T00:00:26.579"
},
{
"pd": "Ckafka",
"time": "2023-05-25T18:00:55.350+08:00"
},
{
"pd": "COS",
"time": "2023-05-25T00:06:20.314+08:00"
},
{
"pd": "CDN",
"time": "2023-05-25T00:03:52.051+08:00"
}
]
Processing rule:
log_auto_output(v("pd"),"My Log Set",index_options="content_index", period=3)
Processing result: Four log topics named "CLB", "Ckafka", "COS", and "CDN" are automatically generated under the logset "My Log Set".
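The topic_name constraints in the table above (removal of "|" and truncation to 250 characters) can be sketched as follows. This is a hypothetical helper of ours for illustration, not part of the CLS DSL.

```python
def sanitize_topic_name(name: str) -> str:
    """Apply the documented topic-name rules: drop '|' and keep at most 250 chars."""
    return name.replace("|", "")[:250]

print(sanitize_topic_name("CLB|prod"))      # "CLBprod"
print(len(sanitize_topic_name("x" * 300)))  # 250
```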


Function log_split

Function definition

This function splits a log row into multiple log rows based on the value of a specified field, using a separator or a JMES expression.

Syntax description

log_split(field, sep=",", quote="\"", jmes="", output="")

Field description

Parameter | Description | Type | Required | Default Value | Value Range
field | Field to extract | string | Yes | - | -
sep | Separator | string | No | , | Any single character
quote | Character that encloses the value | string | No | - | -
jmes | JMES expression. For more information, see JMESPath. | string | No | - | -
output | Name of the output field | string | Yes | - | -

Sample

Example 1. Split a log whose field has multiple values
{"field": "hello Go,hello Java,hello python","status":"500"}
Processing rule:
// Use the separator "," to split the log into 3 logs.
log_split("field", sep=",", output="new_field")
Processing result:
{"new_field":"hello Go","status":"500"}
{"new_field":"hello Java","status":"500"}
{"new_field":"hello python","status":"500"}
Example 2. Use a JMES expression to split a log
{"field": "{\"a\":{\"b\":{\"c\":{\"d\":\"a,b,c\"}}}}", "status": "500"}
Processing rule:
// The value of `a.b.c.d` is `a,b,c`.
log_split("field", jmes="a.b.c.d", output="new_field")
Processing result:
{"new_field":"a","status":"500"}
{"new_field":"b","status":"500"}
{"new_field":"c","status":"500"}
Example 3. Split a log that contains a JSON array
{"field": "{\"a\":{\"b\":{\"c\":{\"d\":[\"a\",\"b\",\"c\"]}}}}", "status": "500"}
Processing rule:
log_split("field", jmes="a.b.c.d", output="new_field")
Processing result:
{"new_field":"a","status":"500"}
{"new_field":"b","status":"500"}
{"new_field":"c","status":"500"}
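As a rough mental model of the separator mode of log_split, the following Python sketch (our own simulation, not the CLS implementation) splits the value of field on sep and emits one row per piece under the output field name, copying the remaining fields unchanged:

```python
import json

def log_split_sim(log, field, sep=",", output="new_field"):
    """Split one log row into several: one output row per separated piece."""
    value = log.pop(field)  # the source field is consumed, as in Example 1
    return [{**log, output: part} for part in value.split(sep)]

row = {"field": "hello Go,hello Java,hello python", "status": "500"}
for r in log_split_sim(row, "field", sep=",", output="new_field"):
    print(json.dumps(r))
```

Note that all other fields ("status" here) are duplicated into every resulting row, which matches the processing results shown above.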

Function log_drop

Function definition

This function is used to delete logs that meet a specified condition.

Syntax description

log_drop(condition)

Field description

Parameter | Description | Type | Required | Default Value | Value Range
condition | Function expression whose return value is of bool type | bool | Yes | - | -

Sample

Delete logs where status is 200 and retain other logs.
Raw log:
{"field": "a,b,c", "status": "500"}
{"field": "a,b,c", "status": "200"}
Processing rule:
log_drop(op_eq(v("status"), 200))
Processing result:
{"field":"a,b,c","status":"500"}

Function log_keep

Function definition

This function is used to retain logs that meet a specified condition.

Syntax description

log_keep(condition)

Field description

Parameter | Description | Type | Required | Default Value | Value Range
condition | Function expression whose return value is of bool type | bool | Yes | - | -

Sample

Retain logs where status is 500 and delete other logs.
Raw log:
{"field": "a,b,c", "status": "500"}
{"field": "a,b,c", "status": "200"}
Processing rule:
log_keep(op_eq(v("status"), 500))
Processing result:
{"field":"a,b,c","status":"500"}
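log_drop and log_keep are mirror images: one discards the rows matching the condition, the other retains only them. A minimal Python simulation of both (ours, not the CLS implementation):

```python
def log_drop_sim(logs, cond):
    """Delete rows for which cond(log) is True; keep everything else."""
    return [log for log in logs if not cond(log)]

def log_keep_sim(logs, cond):
    """Retain only rows for which cond(log) is True."""
    return [log for log in logs if cond(log)]

logs = [
    {"field": "a,b,c", "status": "500"},
    {"field": "a,b,c", "status": "200"},
]
# Both calls below produce the same single-row result, from opposite directions
print(log_drop_sim(logs, lambda log: log["status"] == "200"))
print(log_keep_sim(logs, lambda log: log["status"] == "500"))
```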

Function log_split_jsonarray_jmes

Function definition

This function splits the JSON array in a log field according to JMES syntax and expands each array element into its own log row.

Syntax description

log_split_jsonarray_jmes("field", jmes="items", prefix="")

Field description

Parameter | Description | Type | Required | Default Value | Value Range
field | Field to extract | string | Yes | - | -
jmes | JMES expression that locates the JSON array to split | string | No | - | -
prefix | Prefix added to the names of the expanded fields | string | No | - | -

Sample

Example 1. Raw log:
{"common":"common","result":"{\"target\":[{\"a\":\"a\"},{\"b\":\"b\"}]}"}
Processing rule:
log_split_jsonarray_jmes("result",jmes="target")
fields_drop("result")
Processing result:
{"common":"common", "a":"a"}
{"common":"common", "b":"b"}
Example 2. Raw log:
{"common":"common","target":"[{\"a\":\"a\"},{\"b\":\"b\"}]"}
Processing rule:
log_split_jsonarray_jmes("target",prefix="prefix_")
fields_drop("target")
Processing result:
{"prefix_a":"a", "common":"common"}
{"prefix_b":"b", "common":"common"}
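A rough Python model of log_split_jsonarray_jmes combined with fields_drop, as in the examples above. This is our own simulation with a naive dotted-path walk standing in for full JMES support; the function name is ours, not part of the CLS DSL.

```python
import json

def split_jsonarray_sim(log, field, jmes="", prefix=""):
    """Parse `field` as JSON, walk to the array at `jmes`, and expand each
    element's keys (with optional prefix) into a copy of the remaining row."""
    data = json.loads(log[field])
    for key in [k for k in jmes.split(".") if k]:  # naive path walk, not real JMES
        data = data[key]
    base = {k: v for k, v in log.items() if k != field}  # like fields_drop(field)
    return [{**base, **{prefix + k: v for k, v in item.items()}} for item in data]

row = {"common": "common", "result": '{"target":[{"a":"a"},{"b":"b"}]}'}
for r in split_jsonarray_sim(row, "result", jmes="target"):
    print(json.dumps(r))
```

With an empty jmes the field value itself is treated as the array (Example 2), and prefix is simply prepended to every expanded key.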
