Node Function
The LLM Node is an information-processing node. By calling an LLM, it processes various complex tasks based on the input prompt to meet users' business requirements, and it supports adjusting model parameters for personalized output.
Directions
Input Variables
Input variables take effect only within the same node and cannot be used across nodes. Up to 50 input variables are supported. Click "Add" to configure an input variable with the following fields.
| Field | Description |
| --- | --- |
| Variable Name | Mandatory. May contain only letters, digits, and underscores, and must start with a letter or an underscore. |
| Description | Optional description of the variable. |
| Data Source | Supports two options: "refer" and "input". "Refer" selects an output variable from any preceding node; "input" lets you manually fill in a fixed value. |
| Type | Cannot be selected manually. For "refer", it defaults to the referenced variable's type; for "input", it defaults to string. |
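The variable-name rule above can be checked with a simple regular expression. This is an illustrative sketch assuming ASCII letters; the platform's actual validation may differ:

```python
import re

# Letters, digits, or underscores; must start with a letter or underscore.
VARIABLE_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_variable_name(name: str) -> bool:
    """Return True if `name` satisfies the naming rule above."""
    return bool(VARIABLE_NAME.match(name))

print(is_valid_variable_name("_hospital_name"))  # True
print(is_valid_variable_name("1st_choice"))      # False (starts with a digit)
```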
Model
Supports selecting any large model the current account has access to. It also supports configuring three advanced model settings: temperature, Top P, and maximum reply tokens.
| Parameter | Description |
| --- | --- |
| Temperature | Controls the randomness of generated content. A higher temperature (close to 1.0) spreads the model's token selection, increasing randomness; a lower temperature (close to 0) makes selection more deterministic, producing more conservative and stable output. |
| Top P | Controls the variety of generated content. A smaller Top P makes selection more conservative, producing text that may be more coherent but less varied; a larger Top P increases randomness and variety but may introduce unrelated words. |
| Maximum Reply Tokens | Controls the length of generated content, ensuring the reply does not exceed the set maximum number of tokens. |
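To make the two sampling parameters concrete, here is an illustrative sketch (not the platform's actual decoder) of how temperature reshapes a token distribution and how Top P then truncates it to a "nucleus" of likely tokens:

```python
import math

def top_p_nucleus(logits, temperature=1.0, top_p=1.0):
    """Apply temperature to logits, then keep the smallest set of tokens
    whose cumulative probability reaches top_p (the 'nucleus')."""
    # Temperature-scaled softmax: lower temperature sharpens the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Consider tokens from most to least likely until top_p mass is covered.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return nucleus, probs

nucleus, probs = top_p_nucleus([2.0, 1.0, 0.1], temperature=0.5, top_p=0.9)
print(nucleus)  # [0, 1] — the least likely token is cut from sampling
```

Lowering the temperature concentrates probability on the top token, and shrinking Top P drops low-probability tokens entirely, which is why both settings make output more conservative.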
Prompt
The prompt guides the large model's task processing: the clearer the prompt, the more closely the model's reply will match expectations. In the input box you can write the model's Prompt and reference the node's input variables by entering "/".
This section also provides templates and intelligent one-click optimization. Click a template to copy a Prompt template for the desired business scenario into the input box and replace its placeholders with actual business parameters, or use intelligent one-click optimization to refine the prompt content.
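The platform's templating is internal, but the substitution of {{variable}} placeholders seen in the prompt examples below can be sketched as follows (an assumption-level illustration, not the actual implementation):

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Substitute {{variable name}} placeholders with values from the
    node's input variables. Unknown placeholders are left unchanged."""
    def repl(match):
        name = match.group(1).strip()
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{([^{}]+)\}\}", repl, template)

prompt = "Normalize this name: {{hospital name to be normalized}}"
print(render_prompt(prompt, {"hospital name to be normalized": "Tiantan Hospital"}))
# Normalize this name: Tiantan Hospital
```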
Output Variable
After the node runs, its output variables default to the LLM's thinking process and output content, plus any runtime error message (data type: object; empty when the node runs normally). Output variables cannot be added manually.
Error Handling
Exception handling can be enabled manually. It supports retrying on exceptions and configuring the content output when an exception occurs. The configuration options are as follows.
| Option | Description |
| --- | --- |
| Max Retry Attempts | The maximum number of reruns when the node fails. If retries exceed this number, the node call is considered failed and the "Exception Output Variable" content is returned. Defaults to 3. |
| Retry Interval | The interval between reruns. Defaults to 1 second. |
| Exception Output Variable | The output returned once retries exceed the maximum number. |
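The retry semantics above can be sketched in a few lines. This is an illustrative sketch that assumes the retry count excludes the first attempt; the platform's exact behavior may differ:

```python
import time

def call_with_retry(call, max_retries=3, interval_seconds=1.0,
                    exception_output=None):
    """Run `call`; on exception, retry up to `max_retries` times, waiting
    `interval_seconds` between attempts. If all retries fail, return the
    configured exception output instead of raising (mirroring the node's
    'Exception Output Variable' behavior)."""
    for attempt in range(max_retries + 1):  # initial try + retries
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                return exception_output
            time.sleep(interval_seconds)

# A call that always fails falls back to the exception output.
result = call_with_retry(lambda: 1 / 0, max_retries=2, interval_seconds=0,
                         exception_output={"Error": "LLM call failed"})
print(result)  # {'Error': 'LLM call failed'}
```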
Application Example
1. Parameter Normalization Scenario
In a hospital registration scenario, a preceding node extracts the hospital name from the user's dialogue. The extracted name may not be the full name, while the downstream tool node requires the full name to register. You can use the LLM node to normalize the hospital name.
Assuming the preceding node places the extracted hospital name in the "hospital name to be normalized" variable, the Prompt example is as follows:
Process the parameter value in <parameter value> according to <format requirement>.
<parameter value>
{{hospital name to be normalized}}
</parameter value>
<parameter value range>
Beijing Jishuitan Hospital, Beijing Tiantan Hospital, Beijing Anzhen Hospital, Beijing Union Hospital, Dongfang Hospital of Beijing University of Chinese Medicine, Beijing Chaoyang Hospital, Chinese-Japanese Friendship Hospital, Beijing Shijitan Hospital, Peking University People's Hospital, Beijing 301 Hospital, Beijing Xuanwu Hospital, Beijing Children's Hospital, Peking University First Hospital.
</parameter value range>
<format requirement>
- If an element in <parameter value range> matches the <parameter value>, return the best match in the list as the processed parameter value. Otherwise, keep the current parameter value unchanged.
- Only return the processed parameter value.
</format requirement>
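For close matches, the same normalization can also be approximated deterministically with fuzzy string matching, which is useful as a cheap fallback. A sketch using Python's difflib, with the list truncated to a few names from the prompt above:

```python
import difflib

CANONICAL_NAMES = [
    "Beijing Jishuitan Hospital",
    "Beijing Tiantan Hospital",
    "Beijing Anzhen Hospital",
    "Beijing Union Hospital",
    # ... remaining names from the <parameter value range> above
]

def normalize_hospital_name(raw: str) -> str:
    """Return the closest canonical name, or the input unchanged if
    nothing in the list is a reasonable match."""
    matches = difflib.get_close_matches(raw, CANONICAL_NAMES, n=1, cutoff=0.6)
    return matches[0] if matches else raw

print(normalize_hospital_name("Tiantan Hospital"))  # Beijing Tiantan Hospital
print(normalize_hospital_name("Unknown Clinic"))    # unchanged
```

Unlike the LLM, this handles only surface-level similarity; abbreviations or aliases with little string overlap still need the model.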
2. Role Play Scenario
In a role play scenario, answer questions in the tone of a specific character. You can use the model node for character design.
Assuming the user question is placed in the "Question to answer" variable, the Prompt example is as follows:
Please role-play as tech superstar Elon Musk, who excels at disrupting traditional industries, has founded world-changing companies, and has a massive fan base that widely admires him.
[Elon Musk persona]
Elon Musk has an innovative spirit, a strong feel for technology, and outstanding business acumen. You deeply understand advanced technology and future trends, always capturing people's imagination and filling fans with hope for the future.
[Elon Musk's personality]
Elon Musk's responses must be filled with wisdom and foresight and appropriately weave in technology elements. Your personality is tenacious; you willingly share your entrepreneurial experiences and vision for the future, and you address issues with innovative, pragmatic solutions.
[Elon Musk's speaking method]
Your speaking style has the following characteristics, which you must follow:
- Elon Musk excels at describing events in innovative, forward-thinking ways that stimulate fans' imagination.
- Elon Musk is familiar with advanced technology and cites the latest tech trends and data when answering questions.
- Use inspiring language to offer fans suggestions and encouragement, displaying your wisdom.
Answer the user question:
{{Question to answer}}
3. Article Writing Scenario
In culture and entertainment scenarios, you can define writing requirements in the Prompt and use the large model to generate content directly.
The Prompt example is as follows:
Write an article about "AI applications in the medical field" with a word count between 1500 and 2000. The article should include the following components:
1. Introduction: Provide a brief introduction to the background of AI and its importance in the medical field.
2. Main body:
First part: Describe the basic concept of AI and its main application scenarios in the healthcare field, such as diagnosis, treatment, and drug R&D.
Second part: Analyze the current status and development trends of AI in the healthcare field, including technological advancements, market size, and key participants.
Third part: Discuss the impact and significance of AI in the healthcare field, list related cases or data support, such as improving diagnostic accuracy and reducing medical costs.
3. Conclusion: Summarize the main viewpoints of the article and provide an outlook or recommendations for the future development of AI in the healthcare field.
Please ensure the article has a clear structure, rigorous logic, and smooth language, suitable for professional medical practitioners and readers interested in AI.
FAQs
The "Thought" field in the output variables contains the thinking process of deep-thinking models (such as DeepSeek R1). For models without deep-thinking capability (such as DeepSeek V3), this field is empty.