Agentic NLU AI engine

⚠️ This feature is in public beta. You're encouraged to explore and share feedback. It’s production-ready, but may change as we improve based on real-world use.

Introducing Agentic NLU: Vonage’s most advanced AI engine yet

Powered by cutting-edge NLU and autonomous agents, Vonage's Agentic NLU AI engine enables virtual assistants to understand context, make decisions, and take action, turning static scripts into intelligent, goal-driven conversations.


How an AI Agent works

To fully leverage Agentic NLU, it is essential to understand that an AI Agent operates in a continuous loop of perception, planning, and execution.

1. Perception

The agent observes the user’s input.

Example input: "I want to fly to London tomorrow."

2. Goal identification

The agent identifies the user's specific objective based on the input received.

Goal: Book a flight to London for tomorrow.

3. State evaluation

The agent assesses what it knows vs. what it needs.

Current State: Missing departure time.

4. Planning

The agent generates a sequence of steps to reach the goal.

Steps: Ask for time → Search flights → Offer options → Confirm booking

5. Action selection

The agent selects the next best move based on the plan (e.g., asking a question or calling an API).

Action: “What time would you like to depart tomorrow?”

6. Execution

The agent acts and updates the state.

User response: “Evening.”

7. Goal check

The agent checks if the goal is met.

  • If the goal is met, the agent marks the goal as completed.

  • If the goal is not met, the agent loops back to Step 4 to refine the plan.
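The seven steps above can be sketched as a single loop. The following Python sketch is purely illustrative (the names `run_agent`, `state`, and `ask` are hypothetical, not part of any Vonage API): it fills missing slots one question at a time until the goal check passes.

```python
# Hypothetical sketch of the perceive -> plan -> execute loop described above.
def run_agent(goal, state, ask, max_turns=10):
    """Loop until every required slot in `state` is filled."""
    for _ in range(max_turns):
        # 3. State evaluation: what do we know vs. what do we still need?
        missing = [slot for slot, value in state.items() if value is None]
        if not missing:  # 7. Goal check: everything captured, goal complete
            return state
        # 4-5. Planning + action selection: ask for the next missing slot
        question = f"What {missing[0].replace('_', ' ')} would you like?"
        # 6. Execution: act, then fold the user's answer back into the state
        state[missing[0]] = ask(question)
    raise RuntimeError("goal not reached within max_turns")

# Example: booking a flight where only the departure time is missing
state = {"destination": "London", "date": "tomorrow", "departure_time": None}
result = run_agent("book_flight", state, ask=lambda q: "evening")
print(result["departure_time"])  # evening
```

A real agent would replace the naive "ask for the first missing slot" planner with LLM-driven planning, but the loop shape is the same.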

Why Agentic NLU is unique


Agentic NLU allows virtual agents (VAs) to engage in dynamic, natural interactions without relying on predefined flows. It is uniquely designed to provide human-like intelligence at every touchpoint while maintaining the reliability of deterministic actions.

For VA builders, Agentic NLU represents a shift from explicit logic mapping to objective-oriented configuration:

  • De-cluttered flow design: A single Agentic Capture node can replace multiple Collect Input nodes. Additionally, Agentic nodes have the entire communication layer (Speak & Listen) preconfigured and running in the backend.

  • Low training overhead: Define intents with a simple description and a few examples rather than hundreds of user expressions.

  • Instant configuration: Add or update intents and parameters instantly using natural language instructions (LLM prompts), drastically reducing manual training data requirements.

  • Robust voice handling: Built-in fuzzy matching automatically corrects Automatic Speech Recognition (ASR) transcription errors, preventing unnecessary call failures.

  • Pre-configured governance: Each Agentic node includes built-in system instructions, goals, safety guardrails, and memory configurations.

  • End-to-End Agentic Architecture: The Agentic NLU Template enables you to route 100% of callers' conversations via AI Agents, providing human-like intelligence at every stage of the interaction.
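As an illustration of the fuzzy-matching idea mentioned above, a few lines of Python using the standard library's difflib show how a garbled ASR transcript can be mapped back to a known phrase instead of failing the call. The vocabulary and cutoff here are hypothetical; Vonage's actual matching logic is not documented on this page.

```python
# Illustrative sketch (not Vonage's implementation) of fuzzy-matching a
# misheard ASR transcript against the phrases the agent expects.
import difflib

INTENT_VOCAB = ["book a flight", "cancel my booking", "talk to an agent"]

def fuzzy_match(transcript: str, vocab=INTENT_VOCAB, cutoff=0.6):
    """Return the closest known phrase, or None if nothing is close enough."""
    hits = difflib.get_close_matches(transcript.lower(), vocab, n=1, cutoff=cutoff)
    return hits[0] if hits else None

print(fuzzy_match("buk a flite"))  # book a flight  (ASR garbled the phrase)
print(fuzzy_match("zzz"))          # None (nothing close; would trigger a retry)
```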

Agentic NLU limitations

  • Available only for voice agents and the English language.

  • An Agentic Response is automatically generated by LLMs and cannot be manually defined by Studio users.

  • Retry behavior and reconfirmation behavior are predefined and cannot be altered manually.

  • When using Agentic NLU, AI engine selection becomes fixed and cannot be changed.

  • Agentic NLU uses a different set of components and architecture from Hybrid NLU or Traditional NLU, preventing cross-engine migration.

    • Once an agent is created with Agentic NLU, it cannot be switched to another NLU engine.

    • Agents built with the Hybrid NLU or Traditional NLU engines cannot be imported or duplicated into Agentic NLU.

    • Similarly, agents built with Agentic NLU cannot be converted to Hybrid NLU or Traditional NLU.


Understanding the architecture: Agentic nodes

Agentic NLU relies on two specialized AI Agents implemented as Agentic nodes. These nodes structure how conversations are handled, ensuring that every caller's input is evaluated and routed appropriately through a continuous reasoning loop, enabling fluid, goal-driven interactions.

  1. Agentic Classification node (The Hub).

As the Hub, this node serves as the assistant's central brain. It identifies the caller's intent or topic at each stage of the conversation and routes the flow accordingly. It includes key functionalities such as:

  • Intent recognition: Classifies the caller's input against Agentic Intents to determine the most relevant intent or topic.

  • Global Intent support: Recognizes and processes global intents (e.g., “Cancel,” “Talk to agent”) when configured.

  • LLM-powered matching: Employs large language models (LLMs) for prompt-based classification, requiring minimal training data.

  • Outcome handling: Delivers one of three outcomes (Success, Missed, or Failed) based on the classification result.

  • Intent storage & Agentic fallbacks: Records the detected intent for downstream use. If no match is found, it automatically generates an informed response suggesting available intent options.

  • Isolated testing: Features a Save and Test option for standalone classification performance testing during configuration.

  2. Agentic Capture node (The Branch).

As a Branch, this node functions as the specialist. It is triggered by the Hub to gather the specific data points (parameters) needed to fulfill a chosen intent. It includes key functionalities such as:

  • Intent tracking: Continuously monitors the active intent and recognizes context changes (e.g., switching to a new intent during a conversation).

  • Multi-parameter capture: Acquires multiple parameters within a single process when required to complete an intent.

  • Retry handling: Facilitates parameter retries to ensure accurate capture of values in the desired format.

  • Reconfirmation support: Validates captured parameter values with callers, particularly valuable in voice interactions where ASR errors occur frequently.

  • Agentic responses: Automatically produces contextual replies to address incomplete or incorrect caller inputs, maintaining conversation fluency.

  • Proactive escalation detection: Detects potential escalation scenarios during interactions and pinpoints the specific parameter that triggers them. Escalation is managed automatically — no additional setup needed.

The conversation remains within the Agentic Capture node (the Branch) until the task is complete or an escalation is triggered. If the AI Agent detects a topic shift, it triggers an “Intent Change” scenario. This seamlessly routes the caller back to the Agentic Classification node (the Hub) to re-evaluate the request and re-orient the flow.
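The hub-and-branch hand-off described above behaves like a small state machine. The toy Python sketch below (state and event names are illustrative, not Studio internals) captures the routing rules: the flow stays in the branch until completion or escalation, and an intent change sends it back to the hub.

```python
# Toy state machine for the hub-and-branch routing described above.
def handle_turn(state, event):
    """state is 'hub' (Classification) or 'branch' (Capture)."""
    if state == "hub":
        # the hub routes to a branch only once an intent is detected
        return "branch" if event == "intent_detected" else "hub"
    # in the branch: topic shifts go back to the hub for re-classification
    if event == "intent_change":
        return "hub"
    # task completion or escalation ends the capture cycle
    if event in ("task_complete", "escalation"):
        return "done"
    return "branch"  # otherwise keep capturing parameters

s = "hub"
for ev in ["intent_detected", "parameter_captured",
           "intent_change", "intent_detected", "task_complete"]:
    s = handle_turn(s, ev)
print(s)  # done
```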


Getting started

This guide provides step-by-step instructions for building, training, and optimizing Virtual Agents using the new Agentic NLU AI engine.

1. Choose the Right Agent Type

Selecting the Telephony agent type to begin the creation process.

When creating an agent, you are prompted to select an Agent Type. To access the features described on this page, choose Telephony as your Agent Type.

2. Choose the Right AI Engine

Choosing the Agentic NLU engine to enable autonomous decision-making and context-based goals.

When prompted to select an AI engine, select Agentic NLU.

3. Fill in the Details page

The following fields are available for configuration:

| Field | Options | Description |
| --- | --- | --- |
| Region | List of available options | Select the geographical data center where your agent's data and traffic will be processed. Available options are the United States and Europe. |
| Agent Name | Text | A unique internal label used to identify the agent within your dashboard. |
| Agent Description (Optional) | Text | A summary of the agent's use case. Provides context to the AI engine to improve goal alignment and serves as an internal reference. |
| API Key | List of available options | Select the specific Vonage API Key that will be used to authenticate and bill the agent's activity. |
| Language | List of available options | Define the primary language the agent will use to communicate and process NLU intents. Currently, only English (United States) is available. |
| Voices | List of available options | Select the Text-to-Speech (TTS) voice profile and accent that best fit your brand's persona. |
| Time Zone | List of available options | Set the agent's local time zone, which dictates how time-based conditions and scheduling logic are executed. |

You can view and edit these details later.

4. Choose the right VA template

The Choose Template screen offers the option to start from a blank canvas or use a preconfigured Agentic NLU Template.

You can create an Agentic NLU agent using two different approaches:

  • Agentic NLU Template: AI Studio provides a ready-made template that pre-configures the Agentic Classification and Agentic Capture nodes in an optimal "hub-and-branch" model. This is the fastest way to maximize the benefits of the Agentic NLU engine with a proven architectural structure.

  • Start from Scratch: For full control over your conversation logic, you can start from a blank canvas and manually add Agentic nodes from the node canvas. This allows you to integrate Agentic reasoning into existing rule-based workflows or create unique hybrid architectures.

5. Choose the right type of Event

To utilize Agentic NLU, you must select either Inbound Call or Outbound Call as your triggering event:

  • Inbound Call: The most common trigger; the agent activates immediately when a user calls the assigned number.

  • Outbound Call: Enables the agent to dial a user's number to start a session proactively.

For reference, the other available event types are:

  • End Call: A post-session trigger that allows the agent to continue specific background tasks or data processing after the main call has ended.

  • API Event: Uses 3rd-party integrations to trigger a conversation based on external data or system prompts.

6. Click Create to finalize the setup

Once you click Create, you are redirected to the Agent canvas. This is your primary workspace for building and managing the agent's logic.


Designing the flow: Agentic intents, parameters, and nodes

Designing an Agentic NLU flow differs from Traditional or Hybrid NLU flows. Instead of mapping every possible scenario, you provide the agent with the "ingredients" (Agentic Intents and Parameters) and use Agentic nodes to manage the conversation autonomously.

In a typical Agentic flow, all conversation logic is driven by the interaction between these elements:

  1. Preparation: Before placing nodes, define your Agentic Intents (to establish context) and Parameters (to identify the data that needs to be collected).

  2. Routing (Classification): Start your flow with an Agentic Classification node. This acts as the "hub," evaluating user input against your defined intents to determine the next step.

  3. Fulfillment (Capture): Link identified intents to Agentic Capture nodes. These "specialists" handle multi-turn collection and re-confirmation, ensuring all required parameters are accurately gathered.

  4. Path Management: Configure all exit paths from these nodes. You must specifically account for Intent Changes (re-routing), Escalation (human handoff), and Failed outcomes.

1. Add Agentic Intents

Agentic Intents define the goals your assistant can recognize. Each intent includes:

  • A name (short and descriptive)

  • A natural-language description of the intent (similar to an LLM prompt)

You can define Agentic Intents in the Properties panel of your AI Studio agent. Once created, they can be reused across the agent’s flow.

Example of Agentic Intents, where names and descriptions are defined to guide the agent's classification logic.

2. Add Agentic Parameters

Agentic Parameters describe the information your Virtual Assistant needs to collect. Because they do not rely on entities, you can use natural language instructions instead of maintaining complex entity lists.

You can define Agentic Parameters in the Properties panel of your AI Studio agent. Once created, they can be reused across the agent’s flow.

There are three parameter types: Custom, System, and User.

| Parameter Type | Description | Validity | Editable? | Examples |
| --- | --- | --- | --- | --- |
| Custom | Manually created by builders to store session-specific data. | Session-only: reset once the interaction ends. | Yes: Name and Value. | Booking_ID |
| System | Automatically generated for every Virtual Agent. | Session-only: reset once the interaction ends. | No. | Agent_ID |
| User | Used for data that remains constant across multiple interactions. | Cross-session: follows the caller across different VAs in the same account. | Yes: Name and Value. | User_ID, User_Language_Preference |


Conditions node: Parameter logic

The Conditions node supports both Custom and User parameters, applying qualitative and quantitative logic across all data types.

Type compatibility:

  • Quantitative Logic: Use Number, Date, Time, or DateTime for mathematical or chronological comparisons (e.g., greater than or before).

  • Qualitative Logic: The String type is ideal for text-based matching but does not support quantitative operations.
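A quick worked example shows why the String type cannot drive quantitative logic: text comparison is character-by-character, so numerically or chronologically "smaller" values can sort after larger ones.

```python
# Why String cannot be used for quantitative comparisons:
# text ordering is lexicographic, not numeric or chronological.
print("9" > "10")  # True  -- as text, "9" sorts after "1..."
print(9 > 10)      # False -- as numbers, the expected result

# The same trap with dates stored as free text instead of a Date type:
# February 1 comes before December 31, but the text comparison disagrees.
print("2024-2-1" < "2024-12-31")  # False -- "2" sorts after "1"
```

Using the Number, Date, Time, or DateTime types lets the Conditions node compare actual values rather than character sequences.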

Setting up Custom parameters

| Name | Options | Description |
| --- | --- | --- |
| Name | Text | A name for your parameter. |
| Type | List of available options | Defines the data format the system expects to store. Available options are String, Number, DateTime, Date, and Time. |
| Description | Text | Explains the parameter's role to the AI. This provides general context so the agent understands the "job" of this parameter. |
| Format | Text | The strict logic the AI uses to extract and validate input. This is where you define exactly how the parameter value must be structured. |
| Value | Text | Allows you to specify a pre-defined value for the parameter. |


Date, Time, and DateTime formats

These parameter types are automatically stored in standardized ISO formats:

  • Date: YYYY-MM-DDT00:00:00±HH:MM

  • Time: 1970-01-01THH:MM:SS±HH:MM

  • DateTime: YYYY-MM-DDTHH:MM:SS±HH:MM

Note: When selecting Date, Time, DateTime, or Number, the Format field locks automatically because these types rely on fixed system logic. The Description field remains available for providing context to the AI.
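The stored shapes above can be reproduced with Python's standard datetime module (an illustrative mapping, not Studio's internals; the offset and timestamp below are examples). Note how Date zeroes out the time and Time pins the date to the 1970-01-01 epoch, mirroring the formats listed.

```python
from datetime import datetime, timezone, timedelta

# Example offset; the real value would come from the agent's Time Zone setting.
tz = timezone(timedelta(hours=-5))
captured = datetime(2024, 6, 15, 18, 30, 0, tzinfo=tz)

# DateTime: YYYY-MM-DDTHH:MM:SS±HH:MM
print(captured.isoformat())                                      # 2024-06-15T18:30:00-05:00
# Date: the same day with the time zeroed out
print(captured.replace(hour=0, minute=0, second=0).isoformat())  # 2024-06-15T00:00:00-05:00
# Time: the epoch date 1970-01-01 carrying only the time of day
print(captured.replace(year=1970, month=1, day=1).isoformat())   # 1970-01-01T18:30:00-05:00
```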

When writing the Description field, follow these guidelines:

  • Focus on the Role: Tell the AI what the information is, not how to get it.

    • Example: "This is the caller's 8-digit bank account number used for identity verification."

  • Keep it Concise: Limit the description to 1–2 lines.

  • Avoid Logic: Do not define extraction rules here; the AI will ignore them in this field. Use the Format field for rules.

Example of a Booking ID configuration, differentiating the Description (context) from the Format (logic).

System Parameters


Below is the list of System parameters available in the Agentic NLU AI engine:

| Name | Type | Description |
| --- | --- | --- |
| CALLER_PHONE_NUMBER | String | The caller's telephone number. |
| CALL_START_DATE | DateTime | The date of the call. |
| CALL_START_TIME | DateTime | The time when the agent picked up the call. |
| CALL_DIRECTION | String | Categorizes the interaction as 'Inbound' or 'Outbound'. Used to enforce call-type restrictions and ensure the agent only executes tasks relevant to the current call direction. |
| CALL_TRANSCRIPTION | String | Stores the transcription of the interaction. |
| TRIGGERED_BY_SESSION_ID | String | Identifies the specific session that initiated or "triggered" the current interaction. |
| VAPI_CALL_ID | String | The Call ID from the Voice API in the dashboard. This allows you to match calls between the Studio log and the Voice API dashboard log. |
| SESSION_ID | String | A sequence of numbers and letters that identifies the specific session. |
| AGENT_ID | String | The agent's ID. |
| CONVERSATION_ID | String | The Conversation ID/UUID sent from the Vonage API. |
| AGENT_PHONE_NUMBER | String | The agent's virtual phone number. |
| CALL_START_DATETIME | DateTime | The date and time that the interaction started. |

User Parameters

There are two preconfigured user parameters available in AI Studio:

  • Account_Name for storing the business account name related to the call.

  • Phone_Number for storing the caller's phone number.

3. Add an Agentic Classification node

The node drawer is divided into three functional sections:

  • Setup: Define the inputs, the scope of intents, and escalation thresholds.

  • Configuration: Fine-tune the Communication Layer and LLM response behavior.

  • Test: Perform isolated testing of your classification logic.

| Field | Options | Description |
| --- | --- | --- |
| User Input | List of parameters | A parameter that stores the caller's response and serves as the primary input for the Agentic Classification node. |
| Agentic Intents | List of options | A list of intents in scope for this node. The node only searches for matches within this list. |
| Maximum Number of Attempts | 1–6 | The maximum number of attempts the node makes to identify an intent during a live conversation. If the number is exceeded, the node triggers the Escalation path. The default value is 3. |
| Detected Intent | List of parameters | A parameter that stores the intent identified by the engine. |
| Agentic Response | List of parameters | A parameter that stores the response generated by the engine, particularly useful when no intent is initially detected. |
| Escalation Reason | List of options | A parameter that stores why an escalation was triggered. The available reasons are listed below. |

The available escalation reasons are:

  • “ATTEMPTS_EXCEEDED”: the number of attempts exceeded the defined maximum.

  • “USER_DENIAL”: the caller refused to provide a valid response.

  • “USER_INFORMATION_GAP”: the caller did not have the required information, could not find it, or needed clarification about what was being asked.

  • “HUMAN_HANDOFF”: the caller explicitly requested a human, a support agent, or another channel.

  • “USER_INITIATED_TERMINATION”: the caller expressed emotional frustration or dissatisfaction, or explicitly indicated a desire to stop or end the conversation.

Agentic Classification node exit paths:

  • Intent: This path is triggered when the Agentic Classification node successfully detects an intent. Every intent selected under the Agentic Intents of an Agentic Classification node will have a unique exit path.

  • Escalation: This path is triggered when an escalation scenario is detected based on the caller's response.

  • Failed: This path is triggered when the node fails to run due to an internal error.
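Downstream of the Escalation exit path, a flow typically branches on the stored Escalation Reason (for example, in a Conditions node). The following Python sketch shows equivalent dispatch logic; the reason strings come from the table above, while the routing targets are hypothetical, not Studio node names.

```python
# Illustrative routing on the stored Escalation Reason.
# Reason strings are from the docs; targets are hypothetical examples.
def route_escalation(reason: str) -> str:
    routes = {
        "ATTEMPTS_EXCEEDED": "transfer_to_human",
        "USER_DENIAL": "transfer_to_human",
        "USER_INFORMATION_GAP": "send_followup_sms",
        "HUMAN_HANDOFF": "transfer_to_human",
        "USER_INITIATED_TERMINATION": "end_call_politely",
    }
    return routes.get(reason, "end_call_politely")  # safe default

print(route_escalation("HUMAN_HANDOFF"))  # transfer_to_human
```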

4. Add Agentic Capture nodes

The node drawer is divided into two functional sections:

  • Setup: Define the inputs, parameter scope, and escalation thresholds.

  • Configurations: Fine-tune the "Communication Layer" and Agentic response behavior.

| Field | Options | Description |
| --- | --- | --- |
| User Input | List of parameters | A parameter that stores the caller's response and serves as the primary input for the Agentic Capture node. |
| Current Intent | List of options | The active goal for the node; critical for detecting whether a caller switches topics mid-flow. |
| Agentic Parameters | List of options | The list of specific parameters this node is tasked with filling. |
| Maximum Number of Attempts | 1–3 | The maximum number of attempts the node makes to capture a parameter during a live conversation. If the number is exceeded, the node triggers the Escalation path. Every parameter has its own Maximum Number of Attempts. The default value is 3. |
| Reconfirm the captured parameter value with the end user | Selected/unselected | Whether the agent should repeat the captured value back to the caller. Use this option to ensure accuracy against potential ASR (Automatic Speech Recognition) transcription errors. |
| Agentic Response | List of parameters | A parameter that stores the response from the Agentic Capture node, useful when the parameter capture task is ongoing or an escalation scenario is detected. |
| Escalation Parameter | List of parameters | A parameter that stores the name of the parameter that caused the Escalation path to be triggered. |
| Escalation Reason | List of options | A parameter that stores why an escalation was triggered. The available reasons are listed below. |

The available escalation reasons are:

  • “ATTEMPTS_EXCEEDED”: the number of attempts exceeded the defined maximum.

  • “USER_DENIAL”: the caller refused to provide a valid response.

  • “USER_INFORMATION_GAP”: the caller did not have the required information, could not find it, or needed clarification about what was being asked.

  • “HUMAN_HANDOFF”: the caller explicitly requested a human, a support agent, or another channel.

  • “USER_INITIATED_TERMINATION”: the caller expressed emotional frustration or dissatisfaction, or explicitly indicated a desire to stop or end the conversation.

Agentic Capture node exit paths

  • User input: This path is triggered once all assigned parameters within the node have been successfully captured.

  • Intent Change: This is triggered when an intent change is detected within the caller's response.

  • Escalation: This is triggered when an escalation scenario is detected based on the caller's response.

  • Failed: This is triggered when the node fails to run due to an internal error.
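The per-parameter retry and reconfirmation cycle described above can be pictured as the following Python sketch. It is illustrative only: in Studio this behavior is predefined and cannot be altered, and all names here (`capture`, `ask`, `validate`) are hypothetical.

```python
# Sketch of the per-parameter retry + reconfirmation cycle.
def capture(parameter, ask, validate, reconfirm=True, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        value = ask(f"Please provide your {parameter}.")
        if not validate(value):
            continue  # retry: value not in the desired format
        if reconfirm and ask(f"I heard {value}. Is that right?") != "yes":
            continue  # retry: guards against ASR transcription errors
        return {"status": "captured", "value": value, "attempts": attempt}
    # exceeding the limit maps to the Escalation exit path
    return {"status": "escalated", "reason": "ATTEMPTS_EXCEEDED"}

answers = iter(["12-34", "1234", "yes"])  # garbled first, then corrected
result = capture("booking ID", lambda q: next(answers), str.isdigit)
print(result)  # {'status': 'captured', 'value': '1234', 'attempts': 2}
```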


Reports: Call logs

Every conversation is logged and visible under Reports. Call logs show input, detected intent, captured parameters, and, if capture failed, the reasons for escalation. You can use call logs to improve coverage and adjust flows.

The Call log modal consists of the following elements:

The session information section at the top summarizes essential details of the call:

  • Agent name: The name of the virtual agent that handled the call.

  • Event: The type of event that occurred, either Inbound or Outbound.

  • Caller phone: The caller's telephone number.

  • Agent phone: The agent's telephone number.

  • Date&Time: The time zone, date, and time of the interaction.

  • Session ID: The ID of the session.

To access call logs, perform the following steps:

  1. Navigate to Reports in the black ribbon at the top of the screen.

  2. Select Reports from the dropdown menu.

  3. In the Generate Report section, select Call Log as the Report type.

  4. The session list updates automatically. Click the session ID of the session you wish to inspect.


Migration and Compatibility

To duplicate your Agentic NLU agent, perform the following steps:

  1. Find the agent you want to duplicate.

  2. Click More Actions (⋮) and select Duplicate.

  3. Edit the fields available in the Duplicate Agent modal.

  4. Click Duplicate Agent to save your new VA.

To import an Agentic NLU agent, perform the following steps:

  1. Click Import Agent on the right-hand side.

  2. Upload or drag and drop the zip file with your agent.

  3. The fields in the Import Agent modal autopopulate with the agent's information. You can check and edit them if required.

  4. Click Import Agent to save your new VA.
