Tencent Cloud

LLM Node
Last updated: 2026-02-02 15:07:01

Node Function

The LLM node is an information processing node. It handles complex tasks by calling a large language model (LLM) with the configured prompt content to meet business requirements, and supports adjusting model parameters to obtain personalized output.




Directions

Input Variables

Input variables take effect only within the current node and cannot be used across nodes. Up to 50 input variables are supported to meet scenario requirements. Click "Add" to configure input variables as follows.

Configuration | Description
Variable Name | Can contain only letters, digits, and underscores, and must start with a letter or underscore. Required.
Description | Description of this variable. Optional.
Data source | Supports two options: "refer" and "input". "Refer" selects an output variable from any preceding node, while "input" fills in a fixed value manually. Required.
Type | Cannot be selected: it follows the referenced variable's type for "refer", and defaults to string for "input".
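The variable-naming rule above can be expressed as a regular expression. The following is a minimal self-check sketch based on our reading of the rule (letters, digits, and underscores only; must not start with a digit); it is illustrative, not platform code.

```python
import re

# Our reading of the naming rule: first character is a letter or underscore,
# the rest are letters, digits, or underscores.
VAR_NAME = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def is_valid_variable_name(name: str) -> bool:
    """Return True if `name` satisfies the variable-naming rule."""
    return VAR_NAME.fullmatch(name) is not None
```

For example, `hospital_name` and `_tmp1` are accepted, while `1abc` and `a-b` are rejected.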

Model

Supports selecting any large model the current account has access to. It also supports three advanced model settings: temperature, Top P, and maximum reply tokens.

Configuration | Description
Temperature | Controls the randomness of generated content. A higher temperature (close to 1.0) makes the model's vocabulary selection more dispersed and random; a lower temperature (close to 0) makes the selection more deterministic, producing more conservative and stable output.
Top P | Controls the variety of generated content. A smaller Top P value leads to more conservative selection, producing text that may be more coherent but lacks variety. A larger Top P value increases randomness and variety but may introduce unrelated words.
Maximum Reply Tokens | Controls the length of content generated by the model, ensuring the reply does not exceed the set maximum number of tokens.
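As a rough sketch, the three advanced settings correspond to the sampling parameters of a typical OpenAI-compatible chat-completion request. The endpoint, model name, and field names below are illustrative assumptions, not this platform's actual API.

```python
# Assemble a hypothetical chat-completion payload from the three settings.
def build_llm_request(prompt: str,
                      temperature: float = 0.7,
                      top_p: float = 0.9,
                      max_tokens: int = 1024) -> dict:
    """Validate the advanced settings and assemble a request payload."""
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be within [0, 1]")
    if not 0.0 < top_p <= 1.0:
        raise ValueError("top_p must be within (0, 1]")
    if max_tokens <= 0:
        raise ValueError("max_tokens must be positive")
    return {
        "model": "example-model",                      # placeholder name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,                    # sampling randomness
        "top_p": top_p,                                # nucleus-sampling cutoff
        "max_tokens": max_tokens,                      # hard cap on reply length
    }
```

In practice, lower the temperature for deterministic tasks such as parameter normalization, and raise it for creative writing.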

Multimodal Input

Depending on the model's capabilities, the LLM node can accept multimodal input, including "visual input" and "audio input".




Visual Input

Visual input is used as reference input for vision understanding or image generation. The visual input option appears when the selected model carries any of the tags "Multimodal", "Video Understanding", "Image Understanding", or "Image Generation". Up to 50 input variables are supported.

Configuration | Description
Variable Name | Can contain only letters, digits, and underscores, and must start with a letter or underscore. Required.
Description | Description of this variable. Optional.
Data source | Supports two options: "refer" and "input". "Refer" selects an output variable from any preceding node, while "input" fills in a fixed value manually. Required.
Type | Cannot be selected; supports the image, array<image>, video, and array<video> types.

Audio Input

Audio input is used as reference input for audio understanding or audio generation. The audio input option appears when the selected model carries any of the tags "full modality", "speech recognition", "audio understanding", or "text to speech". Up to 50 input variables are supported.

Configuration | Description
Variable Name | Can contain only letters, digits, and underscores, and must start with a letter or underscore. Required.
Description | Description of this variable. Optional.
Data source | Supports two options: "refer" and "input". "Refer" selects an output variable from any preceding node, while "input" fills in a fixed value manually. Required.
Type | Cannot be selected; supports the audio and array<audio> types.

System Prompt

The System Prompt sets the model's role, behavior patterns, and output style, providing preset instructions for the large model's task processing. The more clearly the prompt is written, the closer the model's replies will be to expectations. The system prompt input box lets you write prompts for the model, and the node's input variables can be referenced by entering "/".
Version: You can save the current prompt draft as a version and fill in a version description. Saved versions can be viewed and copied in the version history, which only shows versions created in the current prompt box. You can also select two versions for content comparison to view the differences between their prompts.
Template: Predefined role-instruction templates. Filling in prompts according to a template is recommended for better results. After writing an instruction, you can also click Template > Save as template to save it as a template.
AI One-Click Optimization: After completing the initial persona design, click One-Click Optimization to refine it. The model optimizes the content based on your input so that it better meets the requirements.
Note:
The AI one-click optimization feature consumes the user's token resources.

User Prompt

The User Prompt accepts specific commands, requests, or questions, and the node's input variables can be referenced by entering "/".
The User Prompt likewise supports versions, templates, and AI one-click optimization.

Intermediate Message

If the node's output is time-consuming, you can customize intermediate messages to ease waiting pressure. Intermediate messages are output in non-streaming mode and support references to preceding node variables.


Output Variable

When no files are output, the node's output variables default to the large model's thinking process and output content after running, plus runtime error messages (data type object; empty when the run succeeds).
Output variables support the text, Markdown, and JSON formats. The JSON format additionally supports customizing output variables, which can be added manually or imported from JSON.



When the output contains files (such as images, documents, videos, or audio), the output variables additionally include the GenFiles field, which carries the file resources generated by the multimodal large model. When this field is present, only the text and Markdown formats are supported, and output variables cannot be customized.
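The default output shape described above can be sketched as follows. The field names ("Thought", the output content, and an Error object that is empty on success) follow this document; the exact JSON envelope is an assumption for illustration.

```python
import json

# Illustrative sample of a successful node output.
raw = json.dumps({
    "Thought": "Step-by-step reasoning of a deep-thinking model...",
    "Content": "Beijing Union Hospital",
    "Error": {},          # object type; empty when the run succeeds
})

def run_succeeded(node_output: str) -> bool:
    """Treat an empty Error object as a successful run."""
    return not json.loads(node_output).get("Error")
```

A downstream node can use such a check to branch on whether the LLM call produced an error.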


Error Handling

Exception handling is disabled by default and can be enabled manually. It supports timeout handling, exception retry, and a configurable exception handling method, as described in the table below.
Currently only the timeout duration can be set by the user; other exceptions are identified automatically by the platform.

Configuration | Description
Timeout trigger handling | The maximum duration a node may run; exceeding it triggers exception handling. The default timeout for an LLM node is 300s, with a configurable range of 1-600s.
Max Retry Attempts | The maximum number of reruns when the node fails. If retries exceed this number, the node call is considered failed and the exception handling method below is executed. Default is 3.
Retry Interval | The interval between reruns. Default is 1 second.
Exception Handling Method | Supports three types: "Output Specific Content", "Execution Exception Flow", and "Interrupt Flow".
Exception Output Variable | When the handling method is "Output Specific Content", the output variables returned once retries exceed the maximum number.

When the exception handling method is "Output Specific Content", the workflow is not interrupted after an exception: once retries are exhausted, the node directly returns the output variables and values set by the user.
When the exception handling method is "Execution Exception Flow", the workflow is not interrupted after an exception: once retries are exhausted, the user-defined exception handling flow is executed.
When the exception handling method is "Interrupt Flow", there are no further settings. Workflow execution is interrupted after an exception (interrupting the flow may cause workflow data loss; back up before the next operation).
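The retry settings above amount to the following control flow: rerun a failing call up to the retry limit with a fixed interval, then fall back according to the chosen handling method. This is a minimal sketch with illustrative names, not platform APIs.

```python
import time

def run_with_retries(call, max_retries=3, retry_interval=1.0,
                     fallback_output=None):
    """Run `call`, retrying on exceptions; apply a fallback when exhausted."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                if fallback_output is not None:
                    # "Output Specific Content": return the preset variables
                    return fallback_output
                raise  # "Interrupt Flow": propagate and stop the workflow
            time.sleep(retry_interval)  # wait before the next rerun
```

With the defaults (3 retries, 1-second interval), a node that fails persistently is attempted four times in total before the fallback applies.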

Application Example

1. Parameter Normalization Scenario

In the hospital registration scenario, the preceding node extracts the hospital name from the user's dialogue, but it may not be the full name, while the downstream tool node requires the full name for registration. You can use the LLM node to normalize the hospital name.



Assuming the hospital name extracted by the preceding node is placed in the "hospital name to be normalized" variable, the Prompt example is as follows:
Prompt example:
According to the <Format Requirements>, process the parameter value in <Parameter Value>.

<Parameter Value>
{{hospital name to be normalized}}
</Parameter Value>

<Parameter Value Range>
Beijing Jishuitan Hospital, Beijing Tiantan Hospital, Beijing Anzhen Hospital, Beijing Union Hospital, Dongfang Hospital of Beijing University of Chinese Medicine, Beijing Chaoyang Hospital, Chinese-Japanese Friendship Hospital, Beijing Shijitan Hospital, Peking University People's Hospital, Beijing 301 Hospital, Beijing Xuanwu Hospital, Beijing Children's Hospital, Peking University First Hospital.
</Parameter Value Range>

<Format Requirements>
- If there is an element in <Parameter Value Range> that matches <Parameter Value>, return the best match in the list as the processed parameter value. Otherwise, keep the current parameter value unchanged.
- Only return the processed parameter value.
</Format Requirements>

2. Role Play Scenario

In a role play scenario, answer questions in the tone of a specific character. You can use the model node for character design.



Assuming the user question is placed in the "Pending question" variable, the Prompt example is as follows:
Prompt example:
Please role-play as tech superstar Elon Musk, who excels at disrupting traditional industries, has founded world-changing companies, and has a massive, admiring fan base.

[Elon Musk persona]
Elon Musk, full of innovative spirit, possesses a strong sense of technology and outstanding commercial acumen. You deeply understand advanced technology and future trends, always capturing people's imagination, making fans look forward to the future with hope.

[Elon Musk's personality]
Elon Musk's response must be full of wisdom, foresight, and appropriately integrate technology elements. Your personality is resilient, willing to share personal entrepreneurship experiences and future vision, addressing issues with innovation and a pragmatic approach.

[Elon Musk's speaking style]
Your common expressions include the following, which you must use:
- You excel at describing events in innovative, forward-thinking ways that stimulate fans' imagination.
- You are familiar with advanced technology and refer to the latest tech trends and data when answering questions.
- You offer suggestions and encouragement to fans in inspiring language, displaying your wise side.

Please answer the user question:
{{Pending question}}

3. Culture and Entertainment Writing Scenario

In the culture and entertainment writing scenario, you can define writing requirements through Prompt and use the large model to directly generate articles.



At this point, the Prompt example is as follows:
Prompt example:
Please write an article about "AI applications in the healthcare field" with a word count between 1500 and 2000. The article should include the following components:

1. Introduction: A brief introduction to the background of AI and its importance in the healthcare field.
2. Body:
First part: Describe the basic concept of AI and its main application scenarios in the healthcare field, such as diagnosis, treatment, and drug R&D.
Second part: Analyze the current status and development trends of AI in the healthcare field, including technological advancements, market size, and key participants.
Third part: Discuss the impact and significance of AI in the healthcare field, list related cases or data support, such as improving diagnostic accuracy and reducing medical costs.
3. Conclusion: Summarize the main viewpoints of the article and provide an outlook or recommendations for AI's future development in the medical field.

Please ensure the article has a clear structure, rigorous logic, and smooth language, suitable for professional medical practitioners and readers interested in AI.

FAQs

The "Thought" field in the output variables contains the thinking process of deep-thinking models (such as DeepSeek R1). For models without deep-thinking capability (such as DeepSeek V3), this field is empty.