Smart Conversations AI Engine

Agent-Based AI Framework Powering Smart Conversations 🌟

Introducing Smart Conversations: Vonage’s most advanced AI engine yet 🚀

Powered by cutting-edge NLU and autonomous agents, Vonage's Smart Conversations AI engine enables virtual assistants to understand context, make decisions, and take action, turning static scripts into intelligent, goal-driven conversations.

Did you know? 💡

While Vonage still supports the Traditional NLU engine, we recommend Hybrid NLU or Smart Conversations for more advanced, flexible, and context-aware experiences. These newer engines are designed to better handle complex user intents and dynamic interactions.


To fully leverage Smart Conversations, it’s essential to understand how AI Agents operate — how they perceive, reason, and act autonomously.

How an AI Agent works

1. Perception

The agent observes the current state of the world/environment.

Example: The user says "I want to fly to London tomorrow."

2. Goal Identification

The agent determines what the user (or itself) is trying to achieve.

  • Goal: Book a flight to London for tomorrow.

3. State Evaluation

The agent assesses the current state vs the goal state.

  • Current: No flight booked.

  • Goal: Confirmed flight to London on the specified date.

4. Planning

It generates a plan (a sequence of actions) to reach the goal.

  • Steps: Ask for time → Search flights → Offer options → Confirm booking

5. Action Selection

The agent selects the next best action based on the plan. These actions can take the form of requesting more information from the end user or executing third-party API calls.

  • Action: “What time would you like to depart tomorrow?”

6. Execution

It acts and updates the state.

  • User responds: “Evening.”

7. Goal Check / Repeat

Agent checks: Have I reached the goal?

  • If not, go back to Step 4 (re-plan or continue).

  • If yes, mark the goal as completed.
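The seven steps above can be sketched as a simple perceive-plan-act loop. This is a minimal illustration, not Vonage code: the `plan` and `run_agent` functions and the flight-booking state keys are all hypothetical.

```python
# Minimal sketch of the agent loop described above (all names hypothetical).

def plan(state):
    """4. Planning: return the remaining actions needed to reach the goal."""
    steps = []
    if "time" not in state:
        steps.append("ask_time")
    if "flight" not in state:
        steps.append("search_flights")
    if not state.get("confirmed"):
        steps.append("confirm_booking")
    return steps

def run_agent(user_inputs):
    # 1-3. Perception and state evaluation: destination and date already heard.
    state = {"destination": "London", "date": "tomorrow"}
    inputs = iter(user_inputs)
    while True:
        steps = plan(state)
        if not steps:                       # 7. Goal check: nothing left to do
            return state
        action = steps[0]                   # 5. Action selection
        if action == "ask_time":            # 6. Execution
            state["time"] = next(inputs)    #    e.g. the user answers "Evening"
        elif action == "search_flights":
            state["flight"] = f"LHR-{state['time']}"
        elif action == "confirm_booking":
            state["confirmed"] = True

result = run_agent(["Evening"])
```

Each pass through the loop re-plans from the current state, which is what lets a real agent recover when the user changes direction mid-conversation.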


Overview

💡 Smart Conversations uses two templatized AI Agents within each Virtual Agent, ensuring all interactions between the VA and end users are routed through one of these agents. This approach enhances both the VA development process and the user experience during conversations.

The two templatized AI Agents are available in the form of two Smart Nodes:

  • Smart Classification Node: Identifies the intent or the topic that the end user wants to discuss.

  • Smart Capture Node: Collects the parameter details needed to fulfill the end user's intent.

Why use Smart Conversations? ✅

Use Smart Conversations when:

  • You want to reduce VA design time by using LLM prompts for defining intents and parameters.

  • You need greater adaptability because end users may give multiple pieces of information at once or change direction mid-flow.

  • You are building for voice and need robust handling of free-form input.



How do Smart Conversations work?

💡 Smart Conversations relies on two specialized AI agents, called Smart Nodes, that structure how conversations are handled:

  • Smart Classification acts as the central hub of the assistant. It identifies the user’s intent at each stage of the conversation.

  • Smart Capture functions as the branches. Each node gathers the specific information (parameters) needed to fulfill a given intent.

How This Works in Your Conversation Flow 🚀

In a typical flow, one Smart Classification node is connected to multiple Smart Capture nodes, each associated with a different intent.

These nodes work together in a continuous conversation loop: classification leads to capture, which may return to classification if a new intent is detected.

This architecture ensures that every user input is evaluated and routed appropriately, enabling fluid, goal-driven interactions without the need for traditional handoffs or scripted transitions.

The loop continues until the assistant either reaches the end of the interaction or detects an escalation scenario, such as repeated failure to collect input or a user request to speak to an agent.
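The classification-to-capture loop can be illustrated with a short sketch. The `classify` and `capture` functions below are hypothetical stand-ins, not AI Studio APIs; they show the routing pattern, including a context switch that hands control back to classification.

```python
# Hypothetical sketch of the Smart Conversations loop (not Vonage APIs).

def classify(utterance):
    """Stand-in for the Smart Classification node (the hub)."""
    if "book" in utterance:
        return "book_flight"
    if "agent" in utterance:
        return "escalate"
    return None

def capture(intent, utterances):
    """Stand-in for a Smart Capture node (a branch).

    Gathers parameters for `intent`; if the user switches topic, it
    returns control to classification with the newly detected intent.
    """
    params = {}
    for u in utterances:
        new_intent = classify(u)
        if new_intent and new_intent != intent:
            return params, new_intent   # context switch: back to the hub
        params["destination"] = u       # otherwise treat input as a parameter
    return params, None

intent = classify("I want to book a flight")
params, next_intent = capture(intent, ["London", "talk to an agent"])
```

Here the first user reply fills a parameter, while the second triggers a context switch, which is exactly the "capture may return to classification" behavior described above.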

Smart Classification node

The Smart Classification node detects the intent or topic the end user wants to discuss. It performs intent recognition using the Smart Intents you define in the Properties panel.

Key Behaviors:

  • Intent Recognition: Classifies user input against Smart Intents to identify the most relevant intent or topic.

  • Global Intent Support: Recognizes and handles global intents (e.g., “Cancel,” “Talk to agent”) when configured.

  • LLM-Powered Matching: Leverages prompt-based classification via large language models (LLMs), requiring minimal training data.

  • Outcome Handling: Returns one of three outcomes—Success, Missed, or Failed—based on the classification result.

  • Intent Storage & Smart Fallbacks: Stores the detected intent for downstream use. If no match is found, it automatically generates a smart response to inform the user of the available intent options.

  • Isolated Testing: Includes a Save and Test feature for standalone testing of classification behavior during configuration.
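The three outcomes can be sketched as follows. The `call_llm` stub and prompt text are assumptions for illustration; the real node delegates matching to an LLM, but the Success/Missed/Failed branching works the same way.

```python
# Illustrative sketch of the three classification outcomes (names assumed).

SMART_INTENTS = {
    "book_flight": "The user wants to book a flight.",
    "cancel_booking": "The user wants to cancel an existing booking.",
}

def call_llm(prompt):
    # Stub standing in for a real LLM call.
    if "fly" in prompt or "book" in prompt:
        return "book_flight"
    return "none"

def classify_outcome(utterance):
    prompt = "Pick the best Smart Intent for: " + utterance
    try:
        label = call_llm(prompt)
    except Exception:
        return ("Failed", None)        # the classification itself errored
    if label in SMART_INTENTS:
        return ("Success", label)      # stored for downstream nodes
    return ("Missed", None)            # triggers a smart fallback response
```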

Smart Capture node

The Smart Capture node collects the parameter details needed to fulfill the end user's intent.

Key Behaviors:

  • Intent Tracking: Monitors the current intent and detects context switches (e.g., when the user shifts to a different intent mid-conversation).

  • Multi-Parameter Capture: Collects multiple parameters in a single flow when needed to complete an intent.

  • Retry Handling: Supports parameter retries to ensure values are captured in the correct format or structure.

  • Reconfirmation Support: Reconfirms captured parameter values with the user—especially useful in voice channels where ASR errors are common.

  • Smart Responses: Automatically generates contextual responses when user input is incomplete or invalid, keeping conversations smooth and on track.

  • Proactive Escalation Detection: Identifies potential escalation scenarios mid-conversation, including the specific parameter causing the escalation.
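Retry handling and escalation can be sketched for a single parameter. The function name, format rule, and retry limit below are assumptions; in the real engine, retry behavior is pre-defined and not configurable.

```python
# Sketch of retry handling for one parameter (all details assumed).
import re

def capture_param(name, pattern, answers, max_retries=2):
    """Ask for `name` until an answer matches `pattern` or retries run out."""
    it = iter(answers)
    for attempt in range(max_retries + 1):
        value = next(it, None)
        if value is not None and re.fullmatch(pattern, value):
            return value        # valid: could be reconfirmed with the user
    return None                 # escalation, naming the failing parameter

# First answer fails the format rule, the retry succeeds:
booking = capture_param("booking_id", r"[A-Z]{2}\d{4}", ["abc", "VG1234"])
```

Returning `None` together with the parameter name mirrors the "Proactive Escalation Detection" behavior: the flow knows not just that capture failed, but which parameter caused it.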



How to get started

This guide provides step-by-step instructions for building, training, and optimizing Virtual Agents using the new Smart Conversations AI engine.

1. Choose the Right AI Engine

Upon creating an agent, you will see an AI Engine dropdown with three options:

  • Smart Conversations 🚀

  • Hybrid NLU

  • Traditional NLU

Once you select the Smart Conversations AI engine, you can provide an optional Agent Description (maximum 2,000 characters) outlining the VA’s objectives. This description acts as context and helps improve VA performance, and you can edit it later.


2. Choose the Right VA Template

AI Studio provides a ready-made “Smart Conversations” template, which uses both the Smart Classification and Smart Capture nodes in an optimal hub-branch model to maximize the benefits of the Smart Conversations AI engine.

3. Add Intents and Parameters

Smart Conversations replaces a Hybrid NLU or Traditional NLU setup with a simplified approach. Smart Intents and Smart Parameters are defined in the Properties panel of your AI Studio agent. Once created, they can be reused across the agent’s flow.

💡 Smart Intents

Smart Intents define the goals your assistant can recognize. Each intent includes:

  • A name (short and descriptive)

  • A natural-language description of the intent (similar to an LLM prompt)

Keep your descriptions clear and follow the best practices of LLM prompting. This improves the assistant’s ability to classify user input correctly.
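As a concrete illustration, Smart Intent definitions pair a short name with a prompt-style description. The intent names and wording below are hypothetical examples, not shipped defaults.

```python
# Hypothetical Smart Intent definitions (name + LLM-prompt-style description).

smart_intents = [
    {
        "name": "book_flight",
        "description": (
            "The user wants to book a new flight. Match phrases like "
            "'I want to fly to ...', 'book me a ticket', or a destination "
            "combined with a travel date."
        ),
    },
    {
        "name": "talk_to_agent",
        "description": "The user asks to speak with a human agent.",
    },
]

names = [intent["name"] for intent in smart_intents]
```

Note how each description states what to match in plain language, following LLM prompting practice, rather than listing training utterances as in Traditional NLU.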


💡 Smart Parameters

Smart Parameters describe the information your assistant needs to collect.

In the “Smart Parameters” modal, there are three types of parameters: “Session”, “System”, and “User”.

Session parameters

  • Description: Created manually by Studio users within a VA. These parameters are great for data used throughout a session.

  • Validity: Retain their values for one session only (i.e., after the session ends, the parameter values are forgotten).

  • Edit permission: Parameter name, Parameter value

  • Examples: Booking_ID

System parameters (same as Hybrid NLU)

  • Description: Automatically created for all VAs.

  • Validity: Retain their values for one session only (i.e., after the session ends, the parameter values are forgotten).

  • Edit permission: Cannot be edited or deleted by Studio users

  • Examples: Agent_ID, Caller_Number

User parameters (same as Hybrid NLU)

  • Description: Great for storing data that doesn't change very often. Example scenario: if an end user speaks with your VA on Monday and Wednesday, any user parameters set on Monday are still accessible on Wednesday.

  • Validity: Follow a user across sessions with the same VA or a different VA under a given Studio account.

  • Edit permission: Parameter name, Parameter value

  • Examples: User_ID, User_Language_Preference

How to set up your smart parameters

✔️ Session Parameters

You define:

  • A name for the parameter

  • Type: String, Number or DateTime

  • Description: explains the parameter role within the VA

  • Format: list of rules & criteria that the end user’s input must meet for the parameter to be considered valid

  • Value: Optionally specify one or more pre-defined values

  • Multivalue: Optional field which allows a parameter to store more than one value
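Putting the fields above together, a Session parameter definition might look like the following sketch. The parameter name, format rules, and dictionary shape are all assumptions for illustration, not the product's storage format.

```python
# Hypothetical Session parameter definition using the fields listed above.

departure_time = {
    "name": "Departure_Time",
    "type": "DateTime",                 # String, Number, or DateTime
    "description": "When the end user wants the flight to depart.",
    "format": [                         # rules the input must meet to be valid
        "must be a time of day",
        "must be in the future",
    ],
    "value": None,                      # optional pre-defined value(s)
    "multivalue": False,                # a single departure time per booking
}

required_fields = {"name", "type", "description"}
ok = required_fields.issubset(departure_time)
```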


✔️ System Parameters

System parameters exist by default for all VAs, but their values are automatically populated and retained for one session only. Below is the list of System parameters in the Smart Conversations AI engine.

✔️ User Parameters

User parameters follow an end user across sessions and agents running in a given AI Studio account. They are used for storing end-user data that does not change very often.


4. Build your flow using Smart Classification and Smart Capture nodes

When designing with Smart Conversations, all conversation logic is driven through Smart Classification and Smart Capture nodes.

  • Start the flow with a Smart Classification node to detect user intent. Use Smart Capture nodes to collect any required information linked to that intent.

  • Smart Capture supports multi-turn collection, re-confirmation, and can trigger escalation if input is not usable.

Handle all exit paths from these nodes—especially Intent Change, Escalation, and Failed.



Logs and Testing

Every conversation is logged and visible under Reports > Report type = Call Log > Choose Session. Logs show:

  • Input and detected intent

  • Captured parameters

  • Escalation reasons if capture fails

Use logs to improve coverage and adjust flows.



⚠️ Limitations

  • Available for voice agents only

  • A Smart Response is automatically generated by LLMs and cannot be manually defined by Studio users

  • Retry behavior and reconfirmation behavior are pre-defined and cannot be altered manually


Migration and Compatibility

Smart Conversations uses a different set of components and architecture than Hybrid NLU or Traditional NLU, which prevents cross-engine migration.

  • Once an agent is created with Smart Conversations, it cannot be switched to another NLU engine.

  • Agents built with the Hybrid NLU or Traditional NLU engines cannot be imported or duplicated into Smart Conversations.

  • Similarly, agents built with Smart Conversations cannot be converted to Hybrid NLU or Traditional NLU.

Duplicating or Importing a Smart Conversations agent
