
Large Model Knowledge Q&A Node
Last updated: 2026-02-03 17:56:24

Node Function

The LLM Knowledge Q&A node is an information processing node. It lets users enter a question and configure a search scope for knowledge Q&A, and outputs the reply generated by the large language model.




Directions

Input Variables

Input variables take effect only within this node and cannot be used across nodes. Up to 50 input variables are supported to meet scenario requirements. Click "Add" to configure input variables as follows.
Variable Name: Can contain only letters, digits, and underscores, and must start with a letter or an underscore. Required.
Description: A description of the variable. Optional.
Data Source: Supports two options: "Refer" and "Input". "Refer" selects an output variable from any preceding node; "Input" accepts a manually entered fixed value. Required.
Type: The data type cannot be selected manually. It defaults to the type of the referenced variable for "Refer", or to string for "Input".
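The naming rule above (letters, digits, or underscores, starting with a letter or an underscore) can be expressed as a small validator. This is an illustrative sketch, not part of the platform:

```python
import re

# Pattern for the documented rule: letters, digits, or underscores,
# with the first character a letter or an underscore.
_VAR_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_variable_name(name: str) -> bool:
    """Return True if `name` is a legal input-variable name."""
    return bool(_VAR_NAME.match(name))
```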

Knowledge Q&A

Model

Supports selecting large models that the current account has permission to use. Three advanced model settings are also available: temperature, Top_P, and maximum reply tokens. Temperature controls the randomness of the generated content, Top_P controls its diversity, and maximum reply tokens caps the length of the model's output.

User Question

Represents the question in knowledge Q&A; users enter the question they want the LLM to answer from the knowledge base. The field supports direct variable references, manually entered text, or a mix of both. Typical use cases include:
Scenario 1. Perform knowledge retrieval that directly references the current round of user dialogue
Assume an input variable (such as Query) is configured above and refers to the system variable SYS.UserQuery.
The "user question" can then be configured as: Query, indicating that the current round of user dialogue is used for Q&A.
Scenario 2. Concatenate content with an output variable of a preceding node
Assume an input variable (such as TypeId) is configured above and refers to the "device model" variable of a preceding node.
To ask about the warranty period for that device model, the "user question" can then be configured as: How long is the warranty period for TypeId?
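The concatenation in Scenario 2 can be pictured as plain template substitution: the referenced variable's value is spliced into the surrounding text before the question is sent to the model. A minimal sketch, using the example variable name from above and a made-up device model:

```python
def build_user_question(template: str, variables: dict) -> str:
    """Replace each {name} placeholder in the template with its value,
    mirroring a field that mixes fixed text and variable references."""
    return template.format(**variables)

question = build_user_question(
    "How long is the warranty period for {TypeId}?",
    {"TypeId": "X100-Pro"},  # value referred from a preceding node (hypothetical)
)
```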

Prompt

Users can supplement the prompt sent to the model, such as the content sorting order, reply requirements and restrictions, and output format requirements, to improve Q&A accuracy. The field supports direct variable references, manually entered text, or a mix of both.
Version: The current prompt draft can be saved as a version with a version description. Saved versions can be viewed and copied in View Version History, which only shows versions created under the current prompt content box. Two versions can be selected in Content Comparison to view the differences between their prompt contents.
Template: Sets the role directive format template. Filling in the prompt according to the template is recommended for better results. After writing the directive, you can also click Template > Save as Template to save it as a template.
AI One-Click Optimization: After completing the initial persona design, click One-Click Optimization to refine it. The model optimizes the settings based on the input content so that they better meet the corresponding requirements.
Note:
The AI One-Click Optimization function consumes the user's token resources.

Knowledge

Configure the search scope for knowledge: "All Knowledge" or "By Knowledge Base".
For "By Knowledge Base", you can select an existing knowledge base or add a new one. Within each knowledge base, the search scope can be set to "All Knowledge", "By Specific Knowledge", or "By Tag".
All Knowledge: Retrieves all knowledge in the knowledge base, including documents, Q&As, and databases.
By Specific Knowledge: Specific knowledge is divided into documents, Q&As, and databases. For documents, select the documents to retrieve manually. For Q&As, enable or disable Q&A retrieval; if enabled, all Q&As can be recalled. For databases, select the databases to retrieve manually.
By Tag: Retrieves knowledge by document tag. The tag value can be a fixed value or a referenced variable.
Referencing a variable as the tag value gives more flexible control over the search scope. Typical scenario: distinguishing knowledge scopes for employees from different departments. The employee's department can be passed to the system as an API parameter, while knowledge is tagged with department tags. When the workflow is invoked, the search scope is adjusted dynamically based on the employee's department.
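The department-tag scenario can be sketched as a simple filter: knowledge entries carry tags, and the tag value passed in at invocation time (for example, via an API parameter) narrows the search scope. The data below is hypothetical, for illustration only:

```python
def scope_by_tag(knowledge, tag_key, tag_value):
    """Keep only knowledge entries whose tags match the given key/value."""
    return [k for k in knowledge if k.get("tags", {}).get(tag_key) == tag_value]

# Hypothetical knowledge entries tagged by department.
docs = [
    {"title": "HR handbook", "tags": {"department": "hr"}},
    {"title": "Sales playbook", "tags": {"department": "sales"}},
    {"title": "Expense policy", "tags": {"department": "hr"}},
]

# The department arrives as an API parameter when the workflow is invoked.
hr_scope = scope_by_tag(docs, "department", "hr")
```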
Configure specific knowledge as shown below. The document and database options are empty by default; documents and data tables can be added manually. For databases, up to 10 data tables can be selected from the same database; cross-database selection is not supported.



Configure by tag as follows:




Advanced Settings

Set the retrieval strategy, document settings, and Q&A settings for knowledge Q&A, and perform custom search configuration.
Retrieval Strategy Settings
Hybrid Retrieval: Executes keyword retrieval and vector retrieval simultaneously. Recommended for scenarios that require both string and semantic matching; the overall effect is better.
Semantic Retrieval: Suited to scenarios with low vocabulary overlap between queries and text segments, where semantic matching is required.
Excel Retrieval Enhancement
Enabled: Once enabled, Excel spreadsheets can be queried and computed on in natural language, but reply time may increase.
Document Settings
Number of Recalled Fragments: Retrieves the top N document fragments with the highest matching score. Default is 5; maximum is 10.
Retrieval Accuracy: The matched text segments are returned to the LLM as reply references. A lower value recalls more segments but may reduce accuracy; content below the matching threshold is not recalled. Default is 0.2; maximum is 0.99.
Q&A Settings
Number of Recalled Fragments: Retrieves the top N Q&A fragments with the highest matching score. Default is 3; maximum is 5.
Retrieval Accuracy: The matched text segments are returned to the LLM as reply references. A lower value recalls more segments but may reduce accuracy; content below the matching threshold is not recalled. Default is 0.7; maximum is 0.99.
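The interaction between "Number of Recalled" and "Retrieval Accuracy" can be sketched as a threshold-then-top-N filter: segments scoring below the accuracy threshold are dropped first, then at most N of the remaining segments are kept, highest score first. Illustrative only, using the documented document defaults (threshold 0.2, top 5):

```python
def recall(segments, threshold=0.2, top_n=5):
    """segments: list of (text, score) pairs. Drop scores below the
    threshold, then return the top_n highest-scoring segments."""
    kept = [s for s in segments if s[1] >= threshold]
    kept.sort(key=lambda s: s[1], reverse=True)
    return kept[:top_n]

# A lower threshold or higher top_n recalls more segments.
hits = recall(
    [("a", 0.9), ("b", 0.1), ("c", 0.5), ("d", 0.3)],
    threshold=0.2, top_n=2,
)
```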

Intermediate Message

If the node output takes a long time, users can customize intermediate messages to ease waiting pressure. Intermediate messages are output non-streamed and support references to preceding node variables.

Output Variable

The output variables of this node default to the LLM's thinking process, the output content after running, the retrieved knowledge fragments, and the runtime error message (data type object; empty when the node runs normally). Output variables cannot be added manually.




Exception Handling

Exception handling is disabled by default and can be enabled manually. It supports timeout-triggered handling, exception retry, and configuration of the exception handling method, as described in the table below.
Currently, only the duration for "timeout-triggered handling" can be set by the user; other exceptions are identified automatically by the platform.
Timeout-Triggered Handling: Exception handling is triggered when the node's runtime exceeds the maximum duration. The default timeout for the LLM knowledge Q&A node is 300s; the configurable range is 1-600s.
Max Retry Attempts: The maximum number of reruns when the node fails. If retries exceed this number, the node call is considered failed and the exception handling method below is executed. Default is 3.
Retry Interval: The interval between reruns. Default is 1 second.
Exception Handling Method: Supports three options: "output specific content", "execute exception process", and "interrupt process".
Exception Output Variable: When the exception handling method is "output specific content", this is the output variable returned after retries exceed the maximum number.

When the exception handling method is "output specific content", the workflow is not interrupted after an exception; after exception retries, the node directly returns the output variables and values that the user set in the output content.
When the exception handling method is "execute exception process", the workflow is not interrupted after an exception; after exception retries, the user-defined exception handling process is executed.
When the exception handling method is "interrupt process", there are no further settings, and workflow execution is interrupted after an exception occurs.
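The retry behavior described above (rerun up to Max Retry Attempts, pausing Retry Interval seconds between runs, then fall through to the configured exception handling method) can be sketched as follows; the node call and failure handler are placeholders, not platform APIs:

```python
import time

def run_with_retry(node_call, max_retries=3, interval=1.0, on_failure=None):
    """Run node_call; on exception, rerun up to max_retries times,
    sleeping `interval` seconds between attempts. Once retries are
    exhausted, invoke the configured exception handling method."""
    attempts = 1 + max_retries  # the initial run plus the retries
    for i in range(attempts):
        try:
            return node_call()
        except Exception as exc:
            if i == attempts - 1:
                # Retries exhausted: e.g. "output specific content"
                # returns a fallback instead of interrupting the flow.
                return on_failure(exc) if on_failure else None
            time.sleep(interval)
```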

Application Example

Create a Tencent Cloud Agent Development Platform (Tencent Cloud ADP) knowledge Q&A assistant that answers users' questions about workflows on the platform.




FAQs

What is the difference between LLM knowledge Q&A and knowledge retrieval?
The LLM knowledge Q&A node will summarize the retrieved knowledge and generate a final reply, while the knowledge retrieval node only returns the retrieved knowledge fragments.