LLM Node

The LLM Node is a specialized node that leverages language models to generate responses based on conversation context. Unlike the Agent Node, the LLM Node does not make autonomous decisions about when to use tools or execute external actions. Instead, it focuses on generating high-quality text responses with flexible output formatting options.

Understanding the LLM Node

The LLM Node processes input from previous nodes, including the conversation history (state.messages) and any custom State, and uses a language model to generate a response based on a System Prompt and optional configuration settings. It is particularly useful for:

  • Generating responses when no external tools are required
  • Processing, analyzing, or transforming data in the State
  • Creating structured outputs in JSON format for consumption by other nodes

Inputs

| Input | Required | Description |
|---|---|---|
| System Prompt | Yes | Text prompt that defines the LLM's role and response parameters. |
| Chat Model | No | A custom Chat Model (LLM) to use instead of the default one defined in the Start Node. |
| JSON Schema | No | A schema defining the structure of the LLM Node's response when JSON Structured Output is enabled. |
| Enable JSON Structured Output | No | A configuration to force the LLM Node's response to follow a predefined JSON schema. |
| Update State | No | A JSON object that defines how the LLM Node should update the custom State before passing it to the next node. |
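
Taken together, these inputs amount to a small configuration object. The TypeScript sketch below is purely illustrative: the interface and field names are hypothetical stand-ins for the inputs above, not the product's actual internal representation.

```typescript
// Hypothetical shape of an LLM Node's configuration.
// Field names mirror the inputs table above; they are illustrative,
// not the product's actual internal format.
interface LLMNodeConfig {
  systemPrompt: string;                   // Required: the LLM's role and response parameters
  chatModel?: string;                     // Optional: overrides the Start Node's default model
  jsonSchema?: Record<string, unknown>;   // Optional: structure of the response when enabled
  enableJsonStructuredOutput?: boolean;   // Optional: force responses to follow jsonSchema
  updateState?: Record<string, unknown>;  // Optional: keys merged into the custom State
}

// Example: a minimal node that only needs a System Prompt.
const triageNode: LLMNodeConfig = {
  systemPrompt: "You are a support analyst. Summarize the customer's message.",
};
```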

Outputs

The LLM Node can connect to the following nodes as outputs:

  • Agent Node: Connects to an Agent Node to continue the conversation with an agent that can access tools.
  • LLM Node: Routes the conversation flow to another LLM Node for additional processing or response generation.
  • Condition Agent Node: Connects to a Condition Agent Node to implement branching logic based on the agent's evaluation of the conversation.
  • Condition Node: Connects to a Condition Node to implement branching logic based on predefined conditions.
  • Loop Node: Connects to a Loop Node to implement repetitive processes based on specific conditions.
  • End Node: Connects to an End Node to conclude the conversational flow.

Features

JSON Structured Output

When the "Enable JSON Structured Output" option is enabled, the LLM Node can generate responses that conform to a predefined JSON schema. This is particularly useful when:

  • You need to extract specific data points from the LLM's analysis
  • Downstream nodes require structured input for processing
  • You want to ensure consistent response formats for various user queries

For example, if you're building a customer inquiry workflow, you might define a JSON schema that extracts the customer's issue type, urgency level, and relevant product information from their message.
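
As a concrete sketch of that example, the schema below (written as a TypeScript constant, with hypothetical property names) uses standard JSON Schema keywords to pin down those three data points:

```typescript
// Illustrative JSON schema for the customer inquiry example.
// Property names are hypothetical; adapt them to your workflow.
const inquirySchema = {
  type: "object",
  properties: {
    issueType: {
      type: "string",
      enum: ["billing", "technical", "shipping", "other"],
      description: "Category of the customer's issue",
    },
    urgency: {
      type: "string",
      enum: ["low", "medium", "high"],
      description: "How urgent the issue appears to be",
    },
    product: {
      type: "string",
      description: "Product mentioned in the message, if any",
    },
  },
  required: ["issueType", "urgency"],
};
```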

State Management

The LLM Node can update the custom State based on the insights or analysis it generates. This allows the LLM Node to pass valuable information to subsequent nodes in the workflow without exposing that information directly to the user.

For example, an LLM Node might analyze a customer's sentiment and add a "sentimentScore" to the custom State, which a Condition Node could then use to route the conversation to different support agents based on the detected sentiment.
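
To make that flow concrete, here is a hedged sketch: the State update might add the score under a key such as sentimentScore (the name is hypothetical), and a downstream Condition Node would then branch on a threshold.

```typescript
// Illustrative State update for the sentiment example.
// The key "sentimentScore" is hypothetical; use any name your
// Condition Node can reference.
const updateState = {
  sentimentScore: 0.25, // e.g. 0 (very negative) to 1 (very positive)
};

// Conceptually, a Condition Node could then route on a threshold:
const route = updateState.sentimentScore < 0.3 ? "escalation-agent" : "standard-support";
```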

Best Practices

Clear system prompt

Craft a concise and unambiguous System Prompt that accurately reflects the LLM Node's role and capabilities. This keeps the node's responses focused and within its defined scope.
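
For example, a prompt along these lines (purely illustrative) names one role, one task, and the limits of the node's scope:

```typescript
// Illustrative System Prompt: one role, one task, explicit limits.
const systemPrompt = `You are a triage assistant for a software support team.
Classify each incoming message by issue type and urgency.
Do not answer the customer's question; only classify it.`;
```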

Optimize for structured output

Keep your JSON schemas as straightforward as possible, focusing on the essential data elements. Only enable JSON Structured Output when you need to extract specific data points from the LLM's response or when downstream nodes require JSON input.
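
For instance, a flat schema that carries only the fields a downstream node will actually read is usually enough; the sketch below (hypothetical field names) trims the earlier inquiry schema to two essentials:

```typescript
// A deliberately minimal schema: only what the next node consumes.
// Field names are hypothetical.
const minimalSchema = {
  type: "object",
  properties: {
    issueType: { type: "string" }, // category the router needs
    urgency: { type: "string" },   // threshold the router checks
  },
  required: ["issueType", "urgency"],
};
```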

Strategic tool selection

Choose and configure the tools made available to the LLM Node (via the Tool Node) based on the specific tasks it needs to perform. Avoid providing too many tools; an overly broad selection can lead to confusion and inefficiency.