Use Case Example: Gym Business

Introducing the GenerativeAI node in Messaging

Virtual communication plays a significant role in telecommunications, chatbots, and virtual assistants. Much of this has been made possible by the power of Large Language Models (LLMs), which have taken over the market and have already been incorporated into the AI Studio product roadmap.

Generative AI Node in communication channels

With the newly added GenAI node, AI Studio applies the capabilities of the Large Language Model (LLM), not only adding GPT’s unique conversational abilities but also solving a central pain point: the time and effort needed to create a virtual agent. Utilizing the GenAI node in any flow speeds up knowledge base creation significantly by easing the identification of intents and reducing the need to create them manually. This way, minimal training is needed to quickly create an efficient virtual agent.

As a result of GPT’s remarkable abilities, virtual agents can now engage users in a more natural, human-like way. The GenerativeAI node can understand the context of a given text and provide pertinent answers.

Prerequisites

To access the Generative AI node, you must register for a paid account directly with OpenAI. Sign up with OpenAI by clicking this link and creating an account.

To get started with AI Studio, create a Vonage Developer Account (link here).

Setting up Your OpenAI Account for AI Studio

To follow this tutorial, you must update your AI Studio account with the most recent instructions for integrating generative AI. Follow these instructions.

FAQs in AI Studio with Generative AI Node

I was recently introduced to the Generative AI node in AI Studio. I built a virtual agent using the GenerativeAI node for a wellness and fitness service that answers customers’ FAQs. I’ve recently become a pilates lover and have been looking for my ideal gym around my city. While searching, I thought that the FAQs for a gym service would be a great example to explore for my use case. I hate spending time waiting for an agent to answer if I only want to know basic information, such as whether there’s a changing room or where I can leave my personal belongings.

By integrating the GenAI node with SMS for FAQs, users can text their questions, and OpenAI’s GPT will generate informative and tailored responses.
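
To make that concrete, here is a minimal, illustrative Python sketch of the pattern the GenAI node applies for you: an inbound text question is answered by an LLM that is grounded in a short knowledge base description. The model name, knowledge base text, and helper function are assumptions for the example only; inside AI Studio the node does all of this without any code.

```python
# Illustrative sketch only - AI Studio's GenAI node handles this for you.
# It shows, conceptually, how an inbound SMS question can be answered by an LLM
# grounded in a knowledge base description. Model name and text are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

KNOWLEDGE_BASE = (
    "Gym Halo is a fitness studio offering pilates and other classes. "
    "Changing rooms and lockers are available for personal belongings."
)

def answer_faq(user_text: str) -> str:
    """Answer a texted question using only the knowledge base description."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"You are Gym Halo's assistant. Answer only from: {KNOWLEDGE_BASE}"},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

print(answer_faq("Is there a changing room where I can leave my things?"))
```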

But I didn’t stop there. I also wanted to play around and automate processes like signing up for a first lesson and learning more about courses, so I went ahead and added those flows alongside the FAQs handled by the GenerativeAI node.

Let’s kick off the conversation!

We’ll begin by adding nodes, starting with Collect Input followed by the GenAI node.

When adding the Collect Input node, we introduce ourselves to the user, for example by saying “Hi! Welcome to Gym Halo, how can I help you today?” Of course, we have to save and exit.

Most importantly, the Start node has to be connected to the first node, the Collect Input node.

How to Set the Generative AI Node

Let’s move on to the GenAI node, which we have now added to our conversation flow. In Phase 2, in addition to the existing knowledge base, the node allows us to add intents and actions directly into it. There’s no longer any need to add a Classification node to the flow, because we can now add the actions and intents within the GenAI node.

The agent requires very little training and can automatically generate responses based on the knowledge base and the actions we set. Additionally, the GenerativeAI node is capable of handling multiple intents and keeping the context while answering multiple questions.

Here, we need to insert the user input parameter, named “input”. Feel free to choose a different name; it doesn’t have any effect on functionality. If this parameter doesn’t already exist, we can create one by clicking “create parameter”.

To understand and capture the user’s input, we need parameters. Parameters help the agent collect a user’s information, so by defining them it becomes easy to detect information in the user’s input. Parameters appear in many nodes, e.g. the Collect Input, Classification, and Generative AI nodes.

The input will convey the user’s intent and help the node understand the user’s context. We specify the parameter’s Name (input) and its Entity, @says.any. Returning to our parameter settings, we select the newly created “input” parameter.
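
If it helps to picture what the @says.any entity does, here is a tiny conceptual sketch (not AI Studio’s internal implementation): the entity accepts any free text, so the “input” parameter simply ends up holding the user’s raw message for the GenAI node to work with.

```python
# Conceptual sketch of parameter capture; in AI Studio this is configured in the UI.
parameters = {}

def collect_input(user_message: str) -> None:
    # Entity @says.any: no validation, keep the whole utterance as-is.
    parameters["input"] = user_message

collect_input("Do you have a changing room?")
print(parameters)  # {'input': 'Do you have a changing room?'}
```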

Moving forward, we insert the Company Name, which we designate as “Gym Halo”.

Let’s add the text for our knowledge base around my gym-centered FAQs. What you enter in the Knowledge Base defines the boundaries within which your virtual agent can operate. More specifically, it’s a description of what you want the VA to be able to answer. This is what I put in:
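
If you’re building your own version and need a starting point, a knowledge base description along these lines works well (an illustrative example rather than a verbatim copy of my entry): “Gym Halo is a wellness and fitness studio offering pilates and other classes. Members have access to changing rooms and lockers for their personal belongings. New customers can sign up for a first lesson and choose between different memberships. Answer questions about the gym’s facilities, classes, and memberships only.”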

We must now fill in the Output Parameter. The output parameter stores the responses the GenAI node generates from the knowledge base text and the provided input.

Moving to the Actions section, the node can now handle more complex flows in addition to our FAQs. Actions are similar to intents: they are requests that might require a more complex flow path, including follow-up questions or additional tasks like sending an email or retrieving information from a third-party database.

Each action - similar to intents in a Classification node - will have an exit point that can be connected to a new flow. You can add up to ten actions to your node.

For example, we connect the "Create a Membership" action to an input that asks "Which membership are you interested in?". Adding an action creates an exit point from the node to the corresponding conversational flow. Unlike intents, we don’t need to create a large training set with user expressions for each one; the GenAI node takes care of that for us.
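
Conceptually, each action exit behaves like a branch in a routing table. The sketch below is purely illustrative (in AI Studio you connect exit points visually, and the handler functions and the second action name here are hypothetical).

```python
# Illustrative routing of action exit points to follow-up flows (hypothetical code;
# AI Studio connects these visually, with no training phrases required per action).
def create_membership_flow() -> str:
    return "Which membership are you interested in?"

def book_first_lesson_flow() -> str:
    return "Great! Which day works best for your first lesson?"

ACTION_EXITS = {
    "Create a Membership": create_membership_flow,
    "Sign up for a first lesson": book_first_lesson_flow,
}

def route(detected_action: str | None) -> str:
    # The GenAI node decides whether the user's request matches an action;
    # anything else is answered directly from the knowledge base.
    handler = ACTION_EXITS.get(detected_action)
    return handler() if handler else "Answered from the knowledge base."

print(route("Create a Membership"))
```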

Under Configurations, we set the Creativity and Waiting Time according to our preferences. For Creativity, choose between 'None', which means the node will stick to your Knowledge Base when generating an answer, all the way up to 'High', which gives it complete freedom.
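
My assumption is that Creativity behaves much like an LLM sampling temperature, where higher values let the answer stray further from the knowledge base; the mapping below is a hypothetical illustration, not AI Studio’s actual values.

```python
# Hypothetical mapping from the Creativity setting to a sampling temperature.
# Illustration only; the real values used by AI Studio are not documented here.
CREATIVITY_TO_TEMPERATURE = {
    "None": 0.0,   # stick strictly to the Knowledge Base
    "Low": 0.3,
    "Medium": 0.7,
    "High": 1.0,   # complete freedom
}
print(CREATIVITY_TO_TEMPERATURE["None"])
```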

PRO TIP: If the knowledge base cannot answer a user's request, the “fallback” tab comes into play. It also makes sure that the flow can continue if the agent fails to understand the user. I built mine to take a message and have a human agent give the user a call back later. Feel free to design it based on your use case!
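
As a rough sketch of that fallback pattern (illustrative only; in AI Studio you build it visually under the fallback tab, and the message text here is just an example):

```python
# Illustrative fallback handling: if no answer could be generated, take a message
# so a human agent can call the user back. Built visually in AI Studio, not in code.
def handle_turn(generated_answer: str | None) -> str:
    if generated_answer:
        return generated_answer
    return ("Sorry, I couldn't find that in my information. "
            "Leave your name and number and one of our team will call you back.")

print(handle_turn(None))
```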

Testing the Generative AI Node

After we finish building our virtual agent, we access the Tester button located in the top-right corner of the screen. Initiate the conversation by clicking “Start the conversation”. The user can then enter a question, and the virtual agent will quickly answer with information gathered from the knowledge base description. The following examples show how the knowledge base enables the handling of multiple intents and how actions are handled.

What’s next?

The integration of GenerativeAI in Vonage AI Studio has been a game changer in virtual communication. It enables various ways of interacting: the virtual agent can easily recognize multiple questions and provide helpful information.

It’s the beginning of a never-ending adventure! The future of GenAI holds further improvements and innovation, which will make our digital interactions richer than ever. Not only will virtual communication be efficient, but it will also be deeply human. Make sure to keep your eyes peeled for more fun features like the GenAI node!

So what’s next? Follow these steps and start your agent-building journey!
