Agentic NLU AI engine
⚠️ This feature is in public beta. You're encouraged to explore and share feedback. It’s production-ready, but may change as we improve based on real-world use.
Introducing Agentic NLU: Vonage’s most advanced AI engine yet
Powered by cutting-edge NLU and autonomous agents, Vonage's Agentic NLU AI engine enables virtual assistants to understand context, make decisions, and take action, turning static scripts into intelligent, goal-driven conversations.
Gated feature
Agentic NLU is currently a gated feature. For more information, please contact Vonage Support or your Account Manager.
How an AI Agent works
To fully leverage Agentic NLU, it is essential to understand that an AI Agent operates in a continuous loop of perception, planning, and execution.
Perception
The agent observes the user’s input.
Example input: "I want to fly to London tomorrow."
Goal identification
The agent identifies the user's specific objective based on the input received.
Goal: Book a flight to London for tomorrow.
State evaluation
The agent assesses what it knows vs. what it needs.
Current State: Missing departure time.
Planning
The agent generates a sequence of steps to reach the goal.
Steps: Ask for time → Search flights → Offer options → Confirm booking
Action selection
The agent selects the next best move based on the plan (e.g., asking a question or calling an API).
Action: “What time would you like to depart tomorrow?”
Execution
The agent acts and updates the state.
User response: “Evening.”
Goal check
The agent checks if the goal is met.
If the goal is met, the agent marks the goal as completed.
If the goal is not met, the agent loops back to Step 4 to refine the plan.
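The loop above can be sketched in a few lines of Python. All function names and the dict-based state are illustrative only; Agentic NLU runs this loop internally and exposes none of these hooks.

```python
# Minimal sketch of the perceive-plan-act loop described above.
def run_agent_loop(goal_fields, state, ask):
    """Loop until every field the goal requires has a value."""
    while True:
        missing = [f for f in goal_fields if f not in state]  # state evaluation
        if not missing:                                       # goal check
            return state                                      # goal met
        next_field = missing[0]                               # action selection
        state[next_field] = ask(next_field)                   # execution

# Simulated caller who answers "evening" when asked for a time.
answers = {"time": "evening"}
result = run_agent_loop(
    ["destination", "date", "time"],
    {"destination": "London", "date": "tomorrow"},
    ask=lambda field: answers[field],
)
print(result)  # {'destination': 'London', 'date': 'tomorrow', 'time': 'evening'}
```

Here the flight-booking goal starts with the destination and date already perceived, so the only remaining action is to ask for the departure time.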
Why Agentic NLU is unique
Pro Tip
If your current flows rely heavily on complex NLU mapping and repetitive input nodes, switching to Agentic NLU can significantly reduce your maintenance burden.
Agentic NLU allows virtual agents (VAs) to engage in dynamic, natural interactions without relying on predefined flows. It is uniquely designed to provide human-like intelligence at every touchpoint while maintaining the reliability of deterministic actions.
For VA builders, Agentic NLU represents a shift from explicit logic mapping to objective-oriented configuration:
De-cluttered flow design: A single Agentic Capture node can replace multiple Collect Input nodes. Additionally, Agentic nodes have the entire communication layer (Speak & Listen) preconfigured and running in the backend.
Low training overhead: Define intents with a simple description and a few examples rather than hundreds of user expressions.
Instant configuration: Add or update intents and parameters instantly using natural language instructions (LLM prompts), drastically reducing manual training data requirements.
Robust voice handling: Built-in fuzzy matching automatically corrects Automatic Speech Recognition (ASR) transcription errors, preventing unnecessary call failures.
Pre-configured governance: Each Agentic node includes built-in system instructions, goals, safety guardrails, and memory configurations.
End-to-end agentic architecture: The Agentic NLU Template enables you to route 100% of caller conversations via AI Agents, providing human-like intelligence at every stage of the interaction.
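The robust voice handling described above is built in and not configurable, but its effect can be approximated with a generic fuzzy match. The sketch below uses Python's difflib as an analogy for how a mis-transcribed word gets snapped to an expected value; it is not the engine's actual implementation.

```python
import difflib

def correct_asr(token, expected_values, cutoff=0.6):
    """Snap a possibly mis-transcribed word to the closest expected value."""
    matches = difflib.get_close_matches(token, expected_values, n=1, cutoff=cutoff)
    return matches[0] if matches else token

# "Lundon" is a typical ASR slip for "London".
print(correct_asr("Lundon", ["London", "Lisbon", "Dublin"]))  # London
```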
Agentic NLU creates an authentic conversational experience where callers feel understood, not just processed.
Natural, free-flowing dialogue: Callers speak naturally without needing to follow a script or specific keywords.
Multi-parameter capture: An Agentic node can extract multiple pieces of information (e.g., date, destination, and time) from a single caller utterance.
Dynamic context switching: If a caller shifts topics mid-flow, the Agentic node recognizes the change request and transitions smoothly to the new topic without restarting the conversation.
Automatic contextual responses: The engine generates smart replies to handle ambiguity or invalid input, providing corrective guidance without manual scripting.
Proactive frustration detection: Identifies escalation triggers based on repeated failures or unmet goals before a caller even asks for a human.
Agentic NLU limitations
Available for voice agents and English language only.
An Agentic Response is automatically generated by LLMs and cannot be manually defined by Studio users.
Retry behavior and reconfirmation behavior are predefined and cannot be altered manually.
When using Agentic NLU, AI engine selection becomes fixed and cannot be changed.
Agentic NLU uses a different set of components and architecture from Hybrid NLU or Traditional NLU, preventing cross-engine migration.
Once an agent is created with Agentic NLU, it cannot be switched to another NLU engine.
Agents built with the Hybrid NLU or Traditional NLU engines cannot be imported or duplicated into Agentic NLU.
Similarly, agents built with Agentic NLU cannot be converted to Hybrid NLU or Traditional NLU.
Understanding the architecture: Agentic nodes
Agentic NLU relies on two specialized AI Agents implemented as Agentic nodes. These nodes structure how conversations are handled, ensuring that every caller's input is evaluated and routed appropriately through a continuous reasoning loop, enabling fluid, goal-driven interactions.
Agentic Classification node (The Hub)
As the Hub, this node serves as the assistant's central brain. It identifies the caller's intent or topic at each stage of the conversation and routes the flow accordingly. It includes key functionalities such as:
Intent recognition: Classifies the caller's input against Agentic Intents to determine the most relevant intent or topic.
Global Intent support: Recognizes and processes global intents (e.g., “Cancel,” “Talk to agent”) when configured.
LLM-powered matching: Employs large language models (LLMs) for prompt-based classification, requiring minimal training data.
Outcome handling: Delivers one of three outcomes (Success, Missed, or Failed) based on the classification result.
Intent storage & Agentic fallbacks: Records the detected intent for downstream use. If no match is found, it automatically generates an informed response suggesting available intent options.
Isolated testing: Features a Save and Test option for standalone classification performance testing during configuration.
Agentic Capture node (The Branch)
As a Branch, this node functions as the specialist. It is triggered by the Hub to gather the specific data points (parameters) needed to fulfill a chosen intent. It includes key functionalities such as:
Intent tracking: Continuously monitors the active intent and recognizes context changes (e.g., switching to a new intent during a conversation).
Multi-parameter capture: Acquires multiple parameters within a single process when required to complete an intent.
Retry handling: Facilitates parameter retries to ensure accurate capture of values in the desired format.
Reconfirmation support: Validates captured parameter values with callers, particularly valuable in voice interactions where ASR errors occur frequently.
Agentic responses: Automatically produces contextual replies to address incomplete or incorrect caller's inputs, maintaining conversation fluency.
Proactive escalation detection: Detects potential escalation scenarios during interactions and pinpoints the specific parameter that triggers them. Escalation is managed automatically — no additional setup needed.
The conversation remains within the Agentic Capture node (the Branch) until the task is complete or an escalation is triggered. If the AI Agent detects a topic shift, it triggers an “Intent Change” scenario. This seamlessly routes the caller back to the Agentic Classification node (the Hub) to re-evaluate the request and re-orient the flow.
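The hub-and-branch hand-off can be sketched as a simple dispatch loop. Everything here, including the function names and outcome strings, is a hypothetical illustration of the routing logic, not an AI Studio API:

```python
def hub_and_branch(classify, capture, first_utterance):
    """Route between the hub (classification) and a branch (capture)."""
    utterance = first_utterance
    while True:
        intent = classify(utterance)          # hub: pick the topic
        outcome, payload = capture(intent)    # branch: gather parameters
        if outcome == "complete":
            return intent, payload            # task finished
        if outcome == "intent_change":
            utterance = payload               # back to the hub to re-classify
        else:                                 # escalation or failure
            return outcome, payload

# Stubs: the caller starts booking a flight, then switches to cancelling.
def classify(u):
    return "cancel" if "cancel" in u else "book_flight"

def capture(intent):
    if intent == "book_flight":
        return ("intent_change", "actually, cancel my booking")
    return ("complete", {"booking_id": "B-1024"})

print(hub_and_branch(classify, capture, "I want to fly to London"))
# ('cancel', {'booking_id': 'B-1024'})
```

The mid-flow topic switch surfaces as an "intent_change" outcome, which sends control back to the hub instead of restarting the conversation.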
Getting started
This guide provides step-by-step instructions for building, training, and optimizing Virtual Agents using the new Agentic NLU AI engine.
Choose the Right Agent Type

When creating an agent, you are prompted to select an Agent Type. To access the features described on this page, choose Telephony as your Agent Type.
Limited availability
Currently, Agentic NLU is only available for voice agents.
Choose the Right AI Engine

When prompted to select an AI engine, select Agentic NLU.
No backward compatibility
If you create an agent with the Agentic NLU AI engine, you cannot change it to Hybrid NLU or Traditional NLU by editing or duplicating the agent, and vice versa.
Fill in the Details page
The following fields are available for configuration:
Region
List of available options
Select the geographical data center where your agent’s data and traffic will be processed. Available options are the United States and Europe.
Agent Name
Text
A unique internal label used to identify the agent within your dashboard.
Agent Description (Optional)
Text
A summary of the agent’s use case. Provides context to the AI engine to improve goal alignment and serves as an internal reference.
API Key
List of available options
Select the specific Vonage API Key that will be used to authenticate and bill the agent’s activity.
Language
List of available options
Define the primary language the agent will use to communicate and process NLU intents. Currently, only English (United States) is available.
Voices
List of available options
Select the Text-to-Speech (TTS) voice profile and accent that best fit your brand's persona.
Time Zone
List of available options
Set the agent's local time zone, which dictates how time-based conditions and scheduling logic are executed.
You can view and edit these details later.
Language
Once selected, you cannot change the agent's language.
Choose the right VA template

You can create an Agentic NLU agent using two different approaches:
Agentic NLU Template: AI Studio provides a ready-made template that pre-configures the Agentic Classification and Agentic Capture nodes in an optimal "hub-and-branch" model. This is the fastest way to maximize the benefits of the Agentic NLU engine with a proven architectural structure.
Start from Scratch: For full control over your conversation logic, you can start from a blank canvas and manually add Agentic nodes from the node canvas. This allows you to integrate Agentic reasoning into existing rule-based workflows or create unique hybrid architectures.
Choose the right type of Event
To utilize Agentic NLU, you must select either Inbound Call or Outbound Call as your triggering event:
Inbound Call: The most common trigger; the agent activates immediately when a user calls the assigned number.
Outbound Call: Enables the agent to dial a user’s number to start a session proactively.
End Call: A post-session trigger that allows the agent to continue specific background tasks or data processing after the main call has ended.
API Event: Uses 3rd-party integrations to trigger a conversation based on external data or system prompts.
Event Limitation
End Call and API Event are currently unavailable for the Agentic NLU engine.
Designing the flow: Agentic intents, parameters, and nodes
Designing an Agentic NLU flow differs from Traditional or Hybrid NLU flows. Instead of mapping every possible scenario, you provide the agent with the "ingredients" (Agentic Intents and Parameters) and use Agentic nodes to manage the conversation autonomously.
In a typical Agentic flow, all conversation logic is driven by the interaction between these elements:
Preparation: Before placing nodes, define your Agentic Intents (to establish context) and Parameters (to identify the data that needs to be collected).
Routing (Classification): Start your flow with an Agentic Classification node. This acts as the "hub," evaluating user input against your defined intents to determine the next step.
Fulfillment (Capture): Link identified intents to Agentic Capture nodes. These "specialists" handle multi-turn collection and re-confirmation, ensuring all required parameters are accurately gathered.
Path Management: Configure all exit paths from these nodes. You must specifically account for Intent Changes (re-routing), Escalation (human handoff), and Failed outcomes.
Pro tip
Always route the Escalation and Failed paths to a dedicated fallback flow. This should include a seamless human handoff or a clear explanation to the user to prevent a "dead-end" experience.
Add Agentic Intents
Agentic Intents define the goals your assistant can recognize. Each intent includes:
A name (short and descriptive)
A natural-language description of the intent (similar to an LLM prompt)
You can define Agentic Intents in the Properties panel of your AI Studio agent. Once created, they can be reused across the agent’s flow.
Best practice
Always add “End conversation” and “Talk to human agent” intents to your agents to improve the performance of the Agentic NLU AI engine.

Add Agentic Parameters
Agentic Parameters describe the information your Virtual Assistant needs to collect. Because they do not rely on entities, you can use natural language instructions instead of maintaining complex entity lists.
You can define Agentic Parameters in the Properties panel of your AI Studio agent. Once created, they can be reused across the agent’s flow.
There are three types of parameters: Custom, System, and User.
Custom: Manually created by builders to store session-specific data. Session-only: values reset once the interaction ends. Editable fields: Name and Value. Example: Booking_ID.
System: Automatically generated for every Virtual Agent. Session-only: values reset once the interaction ends. Not editable. Example: Agent_ID.
User: Used for data that remains constant across multiple interactions. Cross-session: follows the caller across different VAs in the same account. Editable fields: Name and Value. Examples: User_ID, User_Language_Preference.
Conditions node: Parameter logic
The Conditions node supports both Custom and User parameters, applying qualitative and quantitative logic across all data types.
Type compatibility:
Quantitative logic: Use Number, Date, Time, or DateTime for mathematical or chronological comparisons (e.g., greater than or before).
Qualitative logic: The String type is ideal for text-based matching but does not support quantitative operations.
Setting up Custom parameters
Options
Text
A name for your parameter.
Type
List of available options
Defines the data format the system expects to store. Available options are String, Number, DateTime, Date, and Time.
Description
Text
Explains the parameter's role to the AI. This provides general context so the agent understands the "job" of this parameter.
Format
Text
The strict logic the AI uses to extract and validate input. This is where you define exactly how the parameter value must be structured.
Value
Text
Allows you to specify a pre-defined value of the parameter.
Date, Time, and DateTime formats
These parameter types are automatically stored in standardized ISO formats:
Date: YYYY-MM-DDT00:00:00±HH:MM
Time: 1970-01-01THH:MM:SS±HH:MM
DateTime: YYYY-MM-DDTHH:MM:SS±HH:MM
Note: When selecting Date, Time, DateTime, or Number, the Format field locks automatically because these types rely on fixed system logic. The Description field remains available for providing context to the AI.
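For reference, the three stored shapes can be reproduced with Python's standard datetime module. This is only a sketch using an example +02:00 offset; Studio generates these values for you.

```python
from datetime import datetime, timezone, timedelta

tz = timezone(timedelta(hours=2))  # example offset only
captured = datetime(2025, 6, 1, 18, 30, 0, tzinfo=tz)

# DateTime: YYYY-MM-DDTHH:MM:SS±HH:MM
dt_value = captured.isoformat()
print(dt_value)  # 2025-06-01T18:30:00+02:00

# Date: the same shape with the time component zeroed out
date_value = captured.replace(hour=0, minute=0, second=0).isoformat()
print(date_value)  # 2025-06-01T00:00:00+02:00

# Time: anchored to the epoch date 1970-01-01
time_value = datetime(1970, 1, 1, 18, 30, 0, tzinfo=tz).isoformat()
print(time_value)  # 1970-01-01T18:30:00+02:00
```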
Focus on the role: In the Description field, tell the AI what the information is, not how to get it.
Example: "This is the caller's 8-digit bank account number used for identity verification."
Keep it Concise: Limit this to 1–2 lines.
Avoid Logic: Do not define extraction rules here; the AI will ignore them in this field. Use the Format field for rules.
The AI relies strictly on the Format field to pull data from a caller's sentence. To ensure high accuracy, include the following in your instructions:
Length: State exact, minimum, or maximum character counts.
Composition: Define allowed characters (e.g., "digits only," "uppercase letters").
Structure: Describe the sequence (e.g., "starts with two letters followed by four numbers").
Restrictions: Explicitly state what is forbidden (e.g., "no spaces," "no decimals").
Examples: Always provide 2–3 correct examples and 2–3 incorrect examples to "prime" the LLM.
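As an illustration, here is a Format instruction for a hypothetical 8-digit account number, paired with the regex a developer might use to sanity-check test utterances offline. The regex only mirrors the rules; the engine itself interprets the natural-language instruction.

```python
import re

# Hypothetical Format instruction for an account-number parameter:
#   Length: exactly 8 characters.
#   Composition: digits only.
#   Structure: a single unbroken run of digits.
#   Restrictions: no spaces, no dashes, no letters.
#   Correct: 12345678, 00019234. Incorrect: 1234, 12 345 678, AB345678.
ACCOUNT_NUMBER = re.compile(r"\d{8}")

def is_valid_account(value):
    return ACCOUNT_NUMBER.fullmatch(value) is not None

for value in ["12345678", "1234", "12 345 678", "AB345678"]:
    print(value, is_valid_account(value))
```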
Pro Tip
You can use generative AI tools (like Google Gemini, ChatGPT, or Claude) to write the LLM prompt for your Format field:
Copy the Format field template (Length, Composition, Structure, Restrictions, Examples) into your preferred AI tool.
Describe your parameter requirements in plain English and ask the AI to generate the prompt based on that template.
Use this as a head start, then refine the prompt based on your actual testing and performance logs in AI Studio.

System Parameters
Available System Parameters
Below is the list of System parameters available in the Agentic NLU AI engine:
CALLER_PHONE_NUMBER
String
The caller's telephone number.
CALL_START_DATE
DateTime
The date of the call.
CALL_START_TIME
DateTime
The time when the agent picked up the call.
CALL_DIRECTION
String
Categorizes the interaction as 'Inbound' or 'Outbound'. Used to enforce call-type restrictions and ensure the agent only executes tasks relevant to the current call direction.
CALL_TRANSCRIPTION
String
Stores the transcription of the interaction.
TRIGGERED_BY_SESSION_ID
String
Identifies the specific session that initiated or "triggered" the current interaction.
VAPI_CALL_ID
String
The Call ID from the Voice API in the dashboard. This allows you to match calls between the Studio log and the Voice API dashboard log.
SESSION_ID
String
A sequence of numbers and letters to identify the specific session.
AGENT_ID
String
The agent's ID.
CONVERSATION_ID
String
The Conversation ID/UUID sent from Vonage API.
AGENT_PHONE_NUMBER
String
The agent's virtual phone number.
CALL_START_DATETIME
DateTime
The date and time that the interaction started.
User Parameters
There are two preconfigured user parameters available in AI Studio:
Account_Name for storing the business account name related to the call.
Phone_Number for storing the caller's phone number.
Limitation
User parameters do not have Description and Format fields and cannot be used with Agentic nodes. Instead, user parameters must be set using the Set Parameter node.
Add an Agentic Classification node
The node drawer is divided into three functional sections:
Setup: Define the inputs, the scope of intents, and escalation thresholds.
Configuration: Fine-tune the Communication Layer and LLM response behavior.
Test: Perform isolated testing of your classification logic.
User Input
List of parameters
A parameter that stores the caller's response and serves as the primary input for the Agentic Classification node.
Agentic Intents
List of options
A list of intents in scope for this node. The node will only search for matches within this list.
Maximum Number of Attempts
1-6
The maximum number of attempts the node makes to identify an intent during a live conversation. If the number is exceeded, the node triggers the Escalation path. The default value is 3.
Detected Intent
List of parameters
A parameter that stores the intent identified by the engine.
Agentic Response
List of parameters
A parameter that stores the response generated by the engine, particularly useful when no intent is initially detected.
Escalation Reason
List of options
A parameter that stores why an escalation was triggered. The available reasons are:
“ATTEMPTS_EXCEEDED”: the number of attempts exceeded the defined maximum
“USER_DENIAL”: a caller refuses to provide a valid response
“USER_INFORMATION_GAP”: a caller does not have the required information, cannot find it, or needs clarification about what is being asked
“HUMAN_HANDOFF”: a caller explicitly requests a human, support agent, or other channel
“USER_INITIATED_TERMINATION”: a caller expresses emotional frustration, dissatisfaction, or explicitly indicates a desire to stop or end the conversation
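Downstream flows typically branch on the stored Escalation Reason. Here is a sketch of such a dispatch; the reason codes come from the engine, but the route names are made-up flow labels for illustration.

```python
# Reason codes are the engine's; the routes are hypothetical flow names.
ESCALATION_ROUTES = {
    "ATTEMPTS_EXCEEDED": "transfer_to_human",
    "USER_DENIAL": "transfer_to_human",
    "USER_INFORMATION_GAP": "offer_callback",
    "HUMAN_HANDOFF": "transfer_to_human",
    "USER_INITIATED_TERMINATION": "end_call_politely",
}

def route_escalation(reason):
    # Default to a human handoff so no caller hits a dead end.
    return ESCALATION_ROUTES.get(reason, "transfer_to_human")

print(route_escalation("HUMAN_HANDOFF"))  # transfer_to_human
```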
Communication Layer
The Communication Layer is a permanent background flow of Speak and Listen nodes that manages the autonomous conversation loop. It automatically delivers the agent's responses and captures user input at each stage of the interaction.
Always Active: To ensure consistent interaction, the Communication Layer cannot be disabled.
Fixed Audio Settings: The Speak node settings within this layer are pre-configured and cannot be modified.
Within the Agentic Classification node configuration, you can adjust the following Listen node settings for this layer:
Communication Layer
Detect Silence
0.4-5 seconds
The amount of time the agent waits after the user stops speaking to determine whether the parameter is filled and to continue the flow. The default value is 0.4 seconds.
No input
1-60 seconds
The amount of time the agent waits for a caller's input. If the time limit is exceeded, the agent triggers the Retry logic until it reaches the last retry, then moves to the No Input flow. The default value is 10 seconds.
Context keywords
Text
Context keywords improve recognition quality if certain words are expected from the user.
Enable Node Noise Sensitivity
On/Off 10-100
When no value is provided here, the agent-level sensitivity settings will apply to the node as well. When a value is provided for the Node noise sensitivity, these settings will override the agent-level sensitivity settings. The default value is 40.
Other configuration settings
Waiting Time
3-10 seconds
The maximum time to wait for the NLU engine response. If the time is exceeded, the Failed exit path is triggered. The default value is 5 seconds.
Agentic Response Guidelines
Text
You can provide natural language instructions to guide how the LLM crafts its replies. This field operates similarly to the Response Guidelines field in the Q&A node.
You can click Test to test your configured Agentic Classification node without linking it to other nodes.
In the Test section, you can:
Provide a value for “User Input”.
Check the generated “Detected Intent” and “Agentic Response”.
If you are unhappy with the results, you can update:
the Intent Description field from the Agentic Intents tab
the Agentic Response Guidelines field for a given Agentic Classification node

Agentic Classification node exit paths:
Intent: This path is triggered when the Agentic Classification node successfully detects an intent. Every intent selected under the Agentic Intents of an Agentic Classification node will have a unique exit path.
Escalation: This path is triggered when an escalation scenario is detected based on the caller's response.
Failed: This path is triggered when the node fails to run due to an internal error.
Pro tip
If the Failed exit path is triggered frequently, it likely indicates an Agentic NLU timeout. To resolve this, increase the Waiting time from the default 5 seconds up to a maximum of 10 seconds.
Add Agentic Capture nodes
The node drawer is divided into two functional sections:
Setup: Define the inputs, parameter scope, and escalation thresholds.
Configurations: Fine-tune the "Communication Layer" and Agentic response behavior.
User Input
List of parameters
A parameter that stores the caller's response and serves as the primary input for the Agentic Capture node.
Current Intent
List of options
The active goal for the node; critical for detecting if a caller switches topics mid-flow.
Agentic Parameters
List of options
The list of specific parameters this node is tasked with filling.
Maximum Number of Attempts
1-3
The maximum number of attempts the node makes to capture a parameter during a live conversation. If the number is exceeded, the node triggers the Escalation path. Every parameter has its own Maximum Number of Attempts. The default value is 3.
Reconfirm the captured parameter value with the end user
Selected/ unselected
Whether the agent should repeat the captured value back to the caller. You can use this option to ensure accuracy against potential ASR (Automatic Speech Recognition) transcription errors.
Agentic Response
List of parameters
This parameter stores the response from Agentic Capture and is useful when the parameter capture task is ongoing or an escalation scenario is detected.
Escalation Parameter
List of parameters
This stores the parameter name which caused the escalation path to be triggered in Agentic Capture node.
Escalation Reason
List of options
A parameter that stores why an escalation was triggered. The available reasons are:
“ATTEMPTS_EXCEEDED”: the number of attempts exceeded the defined maximum
“USER_DENIAL”: a caller refuses to provide a valid response
“USER_INFORMATION_GAP”: a caller does not have the required information, cannot find it, or needs clarification about what is being asked
“HUMAN_HANDOFF”: a caller explicitly requests a human, support agent, or other channel
“USER_INITIATED_TERMINATION”: a caller expresses emotional frustration, dissatisfaction, or explicitly indicates a desire to stop or end the conversation
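Taken together, the per-parameter settings above amount to a retry-and-reconfirm cycle. It can be sketched like this; all names are hypothetical, since the real logic is fixed inside the node.

```python
def capture_parameter(ask, validate, confirm, max_attempts=3):
    """Retry until a valid, confirmed value is captured or attempts run out."""
    for _ in range(max_attempts):
        value = ask()                 # Communication Layer: speak & listen
        if not validate(value):       # wrong format -> retry
            continue
        if confirm(value):            # reconfirmation guards against ASR errors
            return ("captured", value)
    return ("escalation", "ATTEMPTS_EXCEEDED")

# Simulated caller: a garbled first answer, then a clean 8-digit number.
answers = iter(["12 34", "12345678"])
print(capture_parameter(
    ask=lambda: next(answers),
    validate=lambda v: v.isdigit() and len(v) == 8,
    confirm=lambda v: True,
))  # ('captured', '12345678')
```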
Communication Layer
The Communication Layer is a permanent background flow of Speak and Listen nodes that manages the autonomous conversation loop. It automatically delivers the agent's responses and captures user input at each stage of the interaction.
Always Active: To ensure consistent interaction, the Communication Layer cannot be disabled.
Fixed Audio Settings: The Speak node settings within this layer are pre-configured and cannot be modified.
Within the Agentic Capture node configuration, you can adjust the following Listen node settings for this layer:
Communication Layer
Detect Silence
0.4-5 seconds
The amount of time the system waits after the caller stops speaking to determine whether the parameter is filled and to continue the flow. The default value is 0.4 seconds.
No input
1-60 seconds
The amount of time the agent waits for a caller's input. If the time limit is exceeded, the agent triggers the retry logic until it reaches the last retry, then moves to the No Input flow. The default value is 10 seconds.
Context keywords
Text
Context keywords improve recognition quality if certain words are expected from the caller.
Enable Node Noise Sensitivity
On/Off 10-100
If no value is provided here, the agent-level sensitivity settings will also apply to the node. When a value is provided for the Node noise sensitivity, these settings will override the agent-level sensitivity settings. The default value is 40.
Other Configurations
Waiting Time
3-10 seconds
The maximum time to wait for the NLU engine. The default value is 5 seconds.
Agentic Response Guidelines
Text
You can provide natural language instructions to guide how the LLM crafts its replies. This field operates similarly to the Response Guidelines field in the Q&A node.
Agentic Capture node exit paths
User input: This path is triggered once all assigned parameters within the node have been successfully captured.
Intent Change: This is triggered when an intent change is detected within the caller's response.
Escalation: This is triggered when an escalation scenario is detected based on the caller's response.
Failed: This is triggered when the node fails to run due to an internal error.
Pro tip
If the Failed exit path is triggered frequently, it likely indicates an engine timeout. To resolve this, increase the Waiting time from the default 5 seconds up to a maximum of 10 seconds.
Reports: Call logs

Every conversation is logged and visible under Reports. Call logs show input, detected intent, captured parameters, and, if capture failed, the reasons for escalation. You can use call logs to improve coverage and adjust flows.
The Call log modal consists of the following elements:
The session information section at the top summarizes essential details of the call:
The name of the virtual agent that handled the call.
Event: The type of event that occurred, either Inbound or Outbound.
Caller phone: The caller's telephone number.
Agent phone: The agent's telephone number.
Date&Time: The time zone, date, and time of the interaction.
Session ID: The ID of the session.
The Transcript tab contains the entire transcript of the agent-caller conversation, clearly indicating who spoke when.
The Parameters tab displays all the parameters captured by the agent. You can use this tab to see exactly how the input was processed and where it succeeded or failed.
The Flow Path tab displays a complete trail of every node triggered during a call. Additionally, it exposes the underlying Communication Layer to ensure full transparency:
It displays a count of the total number of nodes used from the Communication Layer across all conversation loops.
Every node activated throughout the session is documented, showing the specific order of the "Perceive-Plan-Act" cycle.
You can access the Input and Output details for each node within the loop by selecting the "Show More" option.
It provides the status code for every triggered node, confirming successful execution at each step.

To access call logs, perform the following steps:
Navigate to Reports in the black ribbon at the top of the screen.
Select Reports from the dropdown menu.
In the Generate Report section, select Call Log as the Report type.
The session list updates automatically. Click the session ID of the session you wish to inspect.
Migration and Compatibility
To duplicate your Agentic NLU agent, perform the following steps:
Find the agent you want to duplicate.
Click More Actions (⋮) and select Duplicate.
Edit the fields available in the Duplicate Agent modal.
Click Duplicate Agent to save your new VA.
To import an Agentic NLU agent, perform the following steps:
Click Import Agent on the right-hand side.
Upload or drag and drop the zip file with your agent.
The fields in the Import Agent modal autopopulate with the agent's information. You can check and edit them if required.
Click Import Agent to save your new VA.