Node Function
The Large Language Model (LLM) Intent Recognition node is an information-processing node that uses an LLM to automatically infer the user's likely goal or intent from the input. You can describe and add intents by writing prompts; the node then routes to subsequent nodes based on the intent category.
Directions
Input Variables
Input variables take effect only within the node itself and cannot be used across nodes. Up to 50 input variables are supported to cover different scenarios. Click "Add" to configure input variables as follows.
Parameter | Description
Variable Name | The variable name may contain only letters, digits, and underscores, and must start with a letter or underscore. Required.
Description | A description of this variable. Optional.
Data Source | The data source supports two options: "Refer" and "Input". "Refer" selects an output variable from any preceding node, while "Input" lets you enter a fixed value manually. Required.
Type | The data type cannot be selected manually. It defaults to the type of the referenced variable when the data source is "Refer", or to string when the data source is "Input".
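As an illustration of the rules in the table above, the sketch below models an input variable list. The field names and node references are hypothetical, not the platform's actual schema; only the naming rule, the "Refer"/"Input" distinction, and the 50-variable limit come from the documentation.

```python
import re

# Hypothetical sketch of an input variable configuration; the actual
# platform schema may differ.
def validate_variable_name(name: str) -> bool:
    """A variable name may contain only letters, digits, and underscores,
    and must start with a letter or underscore."""
    return re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name) is not None

# Two variables: one referencing a preceding node's output, one fixed value.
input_variables = [
    {"name": "user_query", "source": "refer",
     "ref": "start_node.query",   # type follows the referenced variable
     "description": "Raw user utterance"},
    {"name": "scene", "source": "input",
     "value": "library",          # "Input" values default to string
     "description": "Fixed scene label"},
]

assert all(validate_variable_name(v["name"]) for v in input_variables)
assert len(input_variables) <= 50  # at most 50 input variables per node
```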
Model
Select any LLM that the current account has permission to use.
Content to extract intent from
This is the input to the intent recognition LLM: the content from which the user's intent should be recognized. It supports directly referenced variables, manually entered text, or a mix of variables and text.
Intent
Defines the intent categories. You can manually enter possible intent definitions, up to a limit of 20 intents. We recommend following the template format: give each intent a name and a description, and add several intent examples to help the LLM better understand the intent.
Intent Prompt Example:
##Intent Name: Borrow a Book
##Intent Description: In a library scenario, the user indicates the intent to borrow a book.
##Intent Example: I want to borrow a book, borrow a book, I'd like to borrow a book, help me borrow a book
##Intent Name: Return a Book
##Intent Description: In a library scenario, the user indicates the intent to return a book.
##Intent Example: I want to return a book, return this book, I'd like to return "Harry Potter", return the book "The Little Prince"
##Intent Name: Rules and Regulations Inquiry
##Intent Description: In a library scenario, the user asks about library rules and regulations.
##Intent Example: How many books can I borrow at most, when does the library open, how long can I borrow Chinese books, how long can I borrow foreign books
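If intent definitions are kept in structured form, the template format above can be assembled programmatically. The helper and data structure below are illustrative, not a platform API; only the `##Intent Name` / `##Intent Description` / `##Intent Example` format and the 20-intent limit come from the documentation.

```python
# Sketch: assembling intent definitions into the recommended prompt format.
def format_intents(intents):
    lines = []
    for it in intents:
        lines.append(f"##Intent Name: {it['name']}")
        lines.append(f"##Intent Description: {it['description']}")
        lines.append("##Intent Example: " + ", ".join(it["examples"]))
    return "\n".join(lines)

intents = [
    {"name": "Borrow a Book",
     "description": "In a library scenario, the user indicates the intent "
                    "to borrow a book.",
     "examples": ["I want to borrow a book", "borrow a book"]},
]
assert len(intents) <= 20  # at most 20 user-defined intents
prompt_block = format_intents(intents)
```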
Once configured, connect the intent recognition node to other nodes to form a complete call path.
Each intent category of the intent recognition node must be connected to a subsequent processing node; otherwise the workflow cannot be triggered when that category is matched.
There is a default "Other intent" category. When intent recognition fails to match any user-defined intent, the flow proceeds along the "Other intent" branch for subsequent processing.
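The routing behavior described above can be sketched as a lookup with a fallback. The node names and mapping here are hypothetical; the documented behavior is only that each category routes to its connected node and anything unmatched falls through to "Other intent".

```python
# Sketch of the routing behavior: each recognized category maps to a
# downstream node; unmatched intents fall through to "Other intent".
routes = {
    "Borrow a Book": "borrow_flow_node",
    "Return a Book": "return_flow_node",
    "Rules and Regulations Inquiry": "faq_flow_node",
}

def next_node(recognized_intent: str) -> str:
    # Unrecognized intents proceed along the "Other intent" branch.
    return routes.get(recognized_intent, "other_intent_node")
```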
Prompt
Supplementary input for the intent recognition LLM. You can enter priority rules, conflict-resolution rules, output requirements, and so on to improve intent-matching accuracy. It supports directly referenced variables, manually entered text, or a mix of variables and text.
Version: Saves the current prompt draft as a version, with a version description. Saved versions can be viewed and copied in the release log, which shows only versions created under the current prompt box. You can select two versions for content comparison to view the differences between their prompts.
Template: A preset role-instruction template. Filling in the prompt according to the template usually gives better results. After writing an instruction, you can also click Template > Save as Template to save it as a template.
AI One-Click Optimization: After drafting the initial prompt, you can click One-Click Optimization to refine it. The model optimizes the prompt based on the input content so that it better meets the requirements.
Note:
The AI One-Click Optimization feature consumes the user's token resources.
Intermediate Message
If the node's output is time-consuming, you can customize an intermediate message to ease the user's waiting pressure. Intermediate messages are output non-streamed and can reference variables from preceding nodes.
Output Variable
The output variables of this node default to the serial number and name of the matched intent, plus runtime error info (data type: object; empty during normal operation). Adding output variables manually is not supported.
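For orientation, a success-path output might take the shape below. The field names are assumptions made for illustration; check the node's actual output panel for the real names. Only the three documented pieces of information are modeled: intent serial number, intent name, and the error object that stays empty during normal operation.

```python
# Illustrative shape of the node's default output variables.
# Field names ("intent_id", "intent_name", "error") are hypothetical.
output_on_success = {
    "intent_id": 1,                  # serial number of the matched intent
    "intent_name": "Borrow a Book",  # name of the matched intent
    "error": {},                     # object; empty during normal operation
}
```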
Error Handling
Exception handling is disabled by default and can be enabled manually. It supports configuring timeout handling, retry on exception, and the exception handling method, as described in the table below.
Currently, only the duration for "timeout trigger handling" can be set by the user; other exceptions are identified automatically by the platform.
Parameter | Description
Timeout Trigger Handling | Exception handling is triggered when the node's running time exceeds the maximum duration. The default timeout for the LLM intent recognition node is 300 s; the configurable range is 1-600 s.
Max Retry Attempts | The maximum number of reruns when the node fails. If retries exceed this number, the node call is considered failed and the exception handling method below is executed. Default: 3.
Retry Interval | The interval between reruns. Default: 1 second.
Exception Handling Method | Three options are supported: "Output Specific Content", "Execution Exception Flow", and "Interrupt Flow".
Exception Output Variable | When the exception handling method is "Output Specific Content", these are the output variables returned once retries exceed the maximum number.
When the exception handling method is "Output Specific Content", the workflow is not interrupted after an exception. Once the retries are exhausted, the node directly returns the output variables and values the user set in the output content.
When the exception handling method is "Execution Exception Flow", the workflow is not interrupted after an exception. Once the retries are exhausted, the user-defined exception handling flow is executed.
When the exception handling method is "Interrupt Flow", there are no further settings; workflow execution is interrupted after an exception occurs.
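The retry semantics above can be sketched as follows. The function and its signature are illustrative, not a platform API; the defaults (3 retries, 1-second interval) and the three handling methods come from the table above, while the branch placeholder is an assumption.

```python
import time

# Sketch of the documented retry semantics: rerun up to max_retries times
# with a fixed interval, then apply the configured handling method.
def run_with_retry(node_fn, max_retries=3, retry_interval=1.0,
                   on_failure="Interrupt Flow", fallback_output=None):
    attempts = 0
    while True:
        try:
            return node_fn()
        except Exception:
            if attempts >= max_retries:
                if on_failure == "Output Specific Content":
                    return fallback_output        # workflow continues
                if on_failure == "Execution Exception Flow":
                    return "run_exception_branch" # hypothetical branch marker
                raise  # "Interrupt Flow": stop the workflow
            attempts += 1
            time.sleep(retry_interval)
```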
Application Example
Create an AI tool support assistant that identifies different problem types through the intent recognition node and routes them into groups for processing.
The LLM Intent Recognition node is configured as follows:
FAQs
What to do if intent recognition is inaccurate?
The accuracy of intent recognition is affected by multiple factors, such as model capability, category configuration, and the prompt. If the recognized intent category does not meet expectations, refer to the following recommendations to improve the classification accuracy of the intent recognition node:
1. Adjust categories: Keep intent categories concise and clear, and avoid vague or ambiguous semantics. Categories should be clearly differentiated; reduce semantic overlap to lower the chance of classification confusion. For example, avoid overlap between "animal" and "animals and plants".
2. Provide examples: Add specific user input examples to help the model understand and execute the classification task more accurately.
3. Switch models: If intent recognition accuracy is still unsatisfactory even after optimizing categories and advanced settings, we recommend trying different models to find the one that best meets your requirements.