Large Model Knowledge Q&A Node
Last updated:2025-08-27 15:02:44

Node Function

The LLM Knowledge Q&A Node belongs to the Information Processing node category. It lets users enter a question and configure a search scope for knowledge Q&A, and outputs the reply generated by the large language model.




Directions

Input Variables

Input variables take effect only within the same node and cannot be used across nodes. Up to 50 input variables are supported to meet scenario requirements. Click "Add" to configure input variables as follows.
Variable Name: Required. Can only contain letters, digits, or underscores, and must start with a letter or underscore.
Description: An optional description of this variable.
Data source: Supports two options: "refer" and "input". "Refer" selects an output variable from any preceding node, while "input" accepts a manually entered fixed value.
Type: Not selectable. Defaults to the type of the referenced variable for "refer", or to string for "input".
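The naming rule above can be expressed as a simple pattern. This is an illustrative sketch only; the platform validates names in the console, and the regular expression here is an assumption modeled on the stated rule.

```python
import re

# Illustrative check for the input-variable naming rule: letters, digits, or
# underscores only, starting with a letter or underscore. This pattern is an
# assumption based on the rule described above, not platform code.
VARIABLE_NAME = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def is_valid_variable_name(name: str) -> bool:
    # fullmatch ensures the whole name conforms, not just a prefix.
    return bool(VARIABLE_NAME.fullmatch(name))
```

For example, `TypeId` and `_query1` pass, while `1query` and `user-query` do not.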

Knowledge Q&A

Model

Supports selecting large language models for which the current account has usage permissions.

User Question

Represents the question for knowledge Q&A; enter the question you want the LLM to retrieve an answer for. This field supports referencing variables directly, entering text manually, or mixing variables with text. Typical use cases include:
Scenario 1. Retrieve knowledge directly from the user's dialogue in the current round
Assume an input variable (such as Query) is configured above and refers to the system variable SYS.UserQuery.
The "user question" can then be configured as: Query, indicating that the current round's user dialogue is used for Q&A.
Scenario 2. Concatenate content with an output variable of a preceding node
Assume an input variable (such as TypeId) is configured above and refers to the preceding node's "device model" variable.
To ask about the warranty period for that device model, the "user question" can be configured as: What is the warranty period for the TypeId device?
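Conceptually, mixing variables and text works like template substitution. A minimal sketch, assuming a `{name}` placeholder syntax for illustration (the platform resolves variable references internally; this helper is not part of its API):

```python
# Minimal sketch of variable-plus-text concatenation for the "user question"
# field. The {name} placeholder syntax and this helper are illustrative
# assumptions, not the platform's actual mechanism.
def render_question(template: str, variables: dict) -> str:
    # Replace each {name} placeholder with the corresponding variable value.
    return template.format(**variables)
```

For instance, rendering `"What is the warranty period for the {TypeId} device?"` with `{"TypeId": "X-200"}` yields the concatenated question sent to the LLM.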

Prompt

Users can supplement the model prompt, for example with content ordering, reply standards and limitations, or output format requirements, to improve Q&A accuracy. This field supports referencing variables directly, entering text manually, or mixing variables with text.

Knowledge

Configure the search scope for knowledge: "All Knowledge" or "By Knowledge Base".
For a "By Knowledge Base" search, select an existing knowledge base or add a new one. Within each knowledge base, the search scope can be set to "All Knowledge", "By Specific Knowledge", or "By Tag".
All Knowledge: Retrieves all knowledge in the knowledge base.
By Specific Knowledge: For documents, allows manually selecting the documents to retrieve from. For Q&As, supports enabling or disabling Q&A retrieval; when enabled, all Q&As can be recalled.
By Tag: Retrieves knowledge scoped by document tag, supporting fixed tag values or variables referenced as tag values.
Referencing variables as tag values gives more flexible control over the knowledge search scope. Typical scenario: employees from different departments should see different knowledge scopes. The employee's department can be passed to the system as an API parameter while knowledge is tagged by department; during workflow invocation, the search scope is then adjusted dynamically based on the employee's department.
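The department scenario above amounts to a tag filter over the knowledge set. A sketch under assumed data shapes (the document dicts and field names here are illustrative, not the platform's schema):

```python
# Sketch of tag-scoped retrieval: only documents whose tags intersect the
# caller's tags (e.g., the department passed in as an API parameter) are
# eligible for search. Document structure is an illustrative assumption.
def filter_by_tags(documents, tags):
    wanted = set(tags)
    return [doc for doc in documents if wanted & set(doc.get("tags", []))]
```

Passing the employee's department as the tag list narrows retrieval to that department's documents plus any shared documents carrying the same tag.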
Configure by document and Q&A as follows:



Configure by tag as follows:




Advanced Settings

Set the retrieval strategy, document settings, and Q&A settings for knowledge Q&A, and customize the search configuration.
Retrieval strategy settings:
Hybrid Search: Executes keyword retrieval and vector retrieval simultaneously. Recommended for scenarios involving both literal string matching and semantic association; delivers better overall results.
Semantic Retrieval: Suited to scenarios with low vocabulary overlap between queries and text segments, where semantic matching is required.
Excel Retrieval Enhancement:
Enabled: Once enabled, natural-language query and computation over Excel spreadsheets is supported, but this may increase the application's reply time.
Document settings:
Number of documents recalled: Retrieves the top N document segments with the highest match score. Default is 5; maximum is 10.
Document retrieval accuracy: The matched text segments are returned to the LLM as reference for its reply. A lower threshold recalls more segments but may reduce accuracy; content below the match threshold is not recalled. Default is 0.2; maximum is 0.99.
Q&A settings:
Number of Q&As recalled: Retrieves the top N Q&A pairs with the highest match score. Default is 3; maximum is 5.
Q&A retrieval accuracy: The matched Q&A pairs are returned to the LLM as reference for its reply. A lower threshold recalls more pairs but may reduce accuracy; content below the match threshold is not recalled. Default is 0.7; maximum is 0.99.
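The recall-count and accuracy-threshold settings interact as a filter-then-truncate step. A sketch using the documented document defaults (top 5, threshold 0.2); the segment structure is an assumption for illustration:

```python
# Drop segments below the match-score threshold, then keep the top N by score.
# Defaults mirror the documented document settings (N=5, threshold=0.2);
# the {"score": ...} segment shape is an illustrative assumption.
def recall_segments(segments, top_n=5, threshold=0.2):
    eligible = [s for s in segments if s["score"] >= threshold]
    eligible.sort(key=lambda s: s["score"], reverse=True)
    return eligible[:top_n]
```

Lowering the threshold admits more borderline segments into the top-N pool, which is why a lower accuracy setting recalls more content at some cost to precision.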

Output Variable

The output variables of this node default to the LLM's thinking process, the output content after the run, the retrieved knowledge segments, and the runtime error message (data type is object; this field is empty when the node runs normally). Manually adding output variables is not supported.
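Based on the fields listed above, the node's output can be pictured roughly as follows. The field names here are assumptions for illustration only; check the actual node output in the console.

```python
# Illustrative shape only: field names are assumed, not the platform's schema.
example_output = {
    "thinking": "model reasoning trace",
    "content": "Reply generated from the recalled knowledge.",
    "knowledge_segments": ["recalled knowledge segment"],
    "error": {},  # object type; empty when the node runs normally
}
```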




Error Handling

Exception handling can be enabled manually, supporting retry on exception and configurable output content for exceptions. The configuration is as follows.
Max Retry Attempts: The maximum number of retries when the node runs into an exception. If retries exceed this number, the node call is considered failed and the "Exception Output Variable" content is returned. Default is 3.
Retry Interval: The interval between retries. Default is 1 second.
Exception Output Variable: The output variable returned when retries exceed the maximum number.
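The retry policy above can be sketched as a simple loop, using the documented defaults (3 retries, 1-second interval). `run_node` and `fallback` are hypothetical stand-ins for the node call and the Exception Output Variable content; the platform implements this internally.

```python
import time

# Minimal sketch of the documented retry behavior: retry on exception up to
# max_retries times, waiting `interval` seconds between attempts, and return
# the fallback (the "Exception Output Variable" content) if all attempts fail.
def call_with_retry(run_node, max_retries=3, interval=1.0, fallback=None):
    for attempt in range(max_retries + 1):  # initial call plus retries
        try:
            return run_node()
        except Exception:
            if attempt == max_retries:
                return fallback  # retries exhausted: return exception output
            time.sleep(interval)
```

Note the node is invoked up to max_retries + 1 times in total: the original call plus the configured number of retries.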




Application Example

Create a Tencent Cloud Agent Development Platform (Tencent Cloud ADP) knowledge Q&A assistant to answer users' questions about workflow issues on the platform.




FAQs

What is the difference between LLM knowledge Q&A and knowledge retrieval?
The LLM Knowledge Q&A node summarizes the retrieved knowledge and generates a final reply, while the Knowledge Retrieval node only returns the retrieved knowledge segments.