
LLM Label Extractor Node
Last updated: 2026-02-03 14:20:24

Node Function

The LLM Label Extractor Node belongs to the Information Processing node category. It extracts labels from user-configured content, for example, extracting specific device models from maintenance manuals or case occurrence times from case details.




Directions

Input Variables

Input variables only take effect inside the node and cannot be used across nodes. Up to 50 input variables are supported to cover multi-input scenarios. Click Add and configure as follows to add an input variable.
Variable Name: Can only contain letters, digits, or underscores, and must start with a letter or underscore. Required.
Description: Description of this variable. Optional.
Data Source: Supports two options: "Refer" and "Input". "Refer" selects an output variable from any preceding node; "Input" lets you manually fill in a fixed value. Required.
Type: Cannot be selected manually. Defaults to the type of the referenced variable when the data source is "Refer", or to string when the data source is "Input".
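The naming rule and the 50-variable cap above can be expressed as a simple check; the sketch below is illustrative (the regex and limit come from the rules above; function names are not part of the platform):

```python
import re

# Documented rule: letters, digits, or underscores,
# starting with a letter or underscore.
VAR_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")
MAX_INPUT_VARS = 50  # documented upper limit per node

def is_valid_variable_name(name: str) -> bool:
    return bool(VAR_NAME_RE.match(name))

def validate_input_variables(names: list[str]) -> None:
    if len(names) > MAX_INPUT_VARS:
        raise ValueError(f"at most {MAX_INPUT_VARS} input variables allowed")
    for n in names:
        if not is_valid_variable_name(n):
            raise ValueError(f"invalid variable name: {n!r}")
```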

Model

Supports selecting LLMs with usage permissions under the current account.

Content of the label to be extracted

The content from which the user expects the LLM to extract labels. Supports direct import of variables, manual input of content, or a mix of variables and text content.
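Mixing variables and literal text can be pictured as template substitution; the sketch below assumes a `{{name}}` placeholder syntax, which is illustrative only (the platform's actual reference syntax may differ):

```python
import re

def render_template(template: str, variables: dict[str, str]) -> str:
    # Replace {{var}} placeholders with variable values;
    # unknown placeholders are left untouched. Syntax is illustrative.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: variables.get(m.group(1), m.group(0)),
                  template)

content = render_template(
    "Extract labels from this review: {{user_review}}",
    {"user_review": "The latte was too sweet but the service was fast."},
)
```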

Label

Important information extracted from the pending content, such as a "first-tier city" label from a news article. Click "Add" to configure label variables.
Label Name: The name of the label; use semantically meaningful names to help the LLM understand them.
Label Type: Supported data types: string, int, float, bool.
Label Description: Describes the label's purpose in natural language to help the LLM understand it. Supports intelligent one-click optimization.
Example Value: Example values for the label, helping the LLM identify which content should be extracted.
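A label definition bundles the four fields above; the sketch below is a minimal illustration (field names mirror the configuration items and are not a platform API):

```python
# One label definition per configuration item above; illustrative only.
label = {
    "name": "city_tier",                       # Label Name
    "type": "string",                          # string / int / float / bool
    "description": "The tier of the city mentioned in the article, "
                   "e.g. 'first-tier city'.",  # Label Description
    "example": "first-tier city",              # Example Value
}

# Supported label data types mapped to Python types for validation.
TYPE_MAP = {"string": str, "int": int, "float": float, "bool": bool}

def check_extracted_value(label_def: dict, value) -> bool:
    # True when an extracted value matches the label's declared type.
    return isinstance(value, TYPE_MAP[label_def["type"]])
```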

Prompt

As a supplementary input for label extraction, the prompt lets the user define overall constraints on extraction, such as context scope limitations and label dependency/mutual-exclusion rules. It supports direct import of variables, manual input of content, or a mix of input variables and text content.
Version: Supports saving the current prompt draft as a version and filling in the version description. Saved versions can be viewed and copied in View Version History, which only shows versions created under the current prompt box. Supports selecting two versions in Content Comparison to view their prompt content differences.
Template: A predefined role command format template. It is recommended to fill in according to the template for better effect. After writing the command, you can also click Template > Save as Template to save the written command as a template.
AI One-click optimization: After completing the initial prompt draft, you can click One-click Optimization to refine the content. The model optimizes the prompt based on the input content so that it better meets the requirements.
Note:
The AI One-click optimization function will consume the user's token resources.

Intermediate Message

If the node's output is time-consuming, you can customize intermediate messages to ease waiting pressure. Intermediate messages are output non-streaming and support references to preceding node variables.

Output Variable

The output variables of this node default to the configured labels, plus runtime error information Error (data type: object; this field is empty when the node runs normally). Manually adding output variables is not supported.




Error Handling

Exception handling can be enabled manually (off by default), supporting timeout trigger handling, exception retry, and exception handling method configuration. The configuration items are listed below.
Currently, only the duration for "timeout trigger handling" can be set by the user; other exceptions are identified automatically by the platform.
Timeout Trigger Handling: The maximum duration for node operation; exceeding it triggers exception handling. The default timeout for the LLM Label Extractor node is 300s, with a setting range of 1-600s.
Max Retry Attempts: The maximum number of reruns when the node fails. If retries exceed this number, the node call is considered failed and the exception handling method below is executed. Default: 3.
Retry Interval: The interval between reruns. Default: 1 second.
Exception Handling Method: Supports three types: "Output Specific Content", "Execution Exception Flow", and "Interrupt Flow".
Exception Output Variable: Shown when the exception handling method is "Output Specific Content". The output variables returned when retries exceed the maximum number.



"Output Specific Content": When an exception occurs, the workflow is not interrupted. After the node's retries fail, it directly returns the output variables and values set by the user in the output content.
"Execution Exception Flow": When an exception occurs, the workflow is not interrupted. After the node's retries fail, it executes the exception handling flow customized by the user.
"Interrupt Flow": No further settings. When an exception occurs, workflow execution is interrupted.
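The retry semantics above (rerun up to the maximum, wait the interval between attempts, then fall back to the configured output) can be sketched as follows; defaults mirror the documented values, and the function is illustrative, not a platform API:

```python
import time

def run_with_retries(call, timeout_s=300, max_retries=3, interval_s=1):
    """Rerun `call` on failure up to max_retries times, sleeping
    interval_s between attempts. On final failure, return a fallback
    in the spirit of "Output Specific Content". Illustrative sketch."""
    last_exc = None
    for attempt in range(max_retries + 1):
        try:
            return {"result": call(timeout=timeout_s), "Error": None}
        except Exception as exc:
            last_exc = exc
            if attempt < max_retries:
                time.sleep(interval_s)
    # Retries exhausted: surface the error instead of interrupting.
    return {"result": None, "Error": {"message": str(last_exc)}}
```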

Application Example

Create a beverage review analytics assistant to extract drink tags from user reviews for subsequent analysis.



LLM Label Extractor Node configuration is as follows:
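One possible label set for this assistant, together with the kind of result an LLM might extract from a single review, is sketched below; label names, types, and the sample review are illustrative assumptions, not prescribed configuration:

```python
# Hypothetical label definitions for the beverage review assistant.
labels = [
    {"name": "drink_name", "type": "string",
     "description": "The beverage the review is about.",
     "example": "iced latte"},
    {"name": "sweetness_ok", "type": "bool",
     "description": "Whether the reviewer was satisfied with the sweetness.",
     "example": "false"},
    {"name": "rating", "type": "int",
     "description": "Overall rating implied by the review, 1-5.",
     "example": "4"},
]

# Example extraction an LLM might return for the review:
# "The iced latte was way too sweet, but I'd still give it a 4."
extracted = {"drink_name": "iced latte", "sweetness_ok": False, "rating": 4}
```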




