Our Virtual Assistants can be offered standalone or bundled with other products from the Vonage product catalog, such as the Vonage Contact Center.
We currently fully support English, Hebrew, and German, with Spanish coming soon. Dutch, Arabic, French, and Portuguese are available in an Alpha version. A language in Alpha may not yet include all NLU capabilities or, for example, system entities. Please have a look at this list to see the status of each language.
You can share your feature idea with us directly in AI Studio by clicking the question mark button in the top right.
For HTTP agents, please keep in mind that we do not offer a client-facing interface to deploy the agent (e.g., a webchat widget). You can create an agent on our platform and use our API to communicate with it over any text channel.
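As a rough sketch of what such an integration looks like, the snippet below assembles one turn of a text conversation with an HTTP agent. Note that the endpoint URL, the `X-Vgai-Key` header, and the payload field names here are illustrative assumptions, not the documented AI Studio API; consult the API reference for the real request shape.

```python
import json

# Hypothetical endpoint; the real URL comes from the AI Studio API reference.
STUDIO_API_URL = "https://studio-api.example.com/messaging/conversation"  # assumed


def build_step_request(api_key: str, agent_id: str,
                       session_id: str, user_text: str) -> dict:
    """Assemble one turn of a text conversation with an HTTP agent.

    The returned dict describes the HTTP request your channel backend
    (SMS, WhatsApp, webchat, etc.) would send on the user's behalf.
    """
    return {
        "url": STUDIO_API_URL,
        "headers": {
            "X-Vgai-Key": api_key,          # assumed auth header name
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "agent_id": agent_id,           # the agent created in AI Studio
            "session_id": session_id,       # lets the agent keep conversation state
            "input": user_text,             # the user's message for this turn
        }),
    }


req = build_step_request("MY_KEY", "agent-123", "session-1", "Hello")
print(json.loads(req["body"])["input"])  # -> Hello
```

Because the agent has no built-in widget, your application owns the channel: it relays each incoming user message to the agent and renders the agent's reply back to the user.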
Using our API, you can easily create your own outbound call campaigns. Just create an agent on our platform, select "Outbound Event" as the agent's conversation type, and use our call API to set it up.
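A campaign driver can be as simple as building one call-trigger request per lead. The sketch below shows this pattern; the endpoint and payload fields are assumptions for illustration, so check the AI Studio call API documentation for the actual call-triggering request.

```python
import json

# Hypothetical call-trigger endpoint; see the AI Studio API docs for the real one.
OUTBOUND_URL = "https://studio-api.example.com/telephony/make-call"  # assumed


def build_call_requests(agent_id: str, caller_id: str,
                        leads: list[str]) -> list[dict]:
    """Build one outbound-call request per phone number in the campaign list."""
    return [
        {
            "url": OUTBOUND_URL,
            "body": json.dumps({
                "agent_id": agent_id,  # agent created with "Outbound Event" type
                "to": number,          # the lead to dial
                "from": caller_id,     # the number the call appears to come from
            }),
        }
        for number in leads
    ]


campaign = build_call_requests("agent-123", "14155550100",
                               ["14155550101", "14155550102"])
print(len(campaign))  # -> 2
```

In a real campaign you would send these requests with rate limiting and retry handling, and tag each conversation so the results can be filtered in Reports afterwards.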
There are two parts to AI Studio's language understanding: ASR (Automatic Speech Recognition) and NLU/NLP (Natural Language Understanding and Processing). The ASR registers the user input and transforms speech into text. This text is then analyzed by the NLU component, which tries to classify the input against the correct part of the agent's knowledge base.
You can either choose from a number of synthesized voices based on the language you chose, or add your own voice recordings to the agent.
When you create the agent, you can pick the voice that suits you best. If you prefer human voice recordings, you can add them to any node in the flow.
Yes, the virtual assistant can recognize answering machines. Once an answering machine is detected, the platform immediately terminates the call.
Tags can help you determine, for example, how many calls succeeded and how many failed. In the Reports, you can then filter calls by tag, which lets you clearly measure the agent's performance. To learn how to add a tag to a point in the conversation, click here.
All information provided by our Insights API (transcriptions, recordings, and general info) is automatically wiped after 30 days (this retention period will be increased to 90 days in the coming weeks) in order to maintain privacy and compliance with GDPR and other privacy regulations.