Use Case Example: Online Shopping

Online Shopping Experience with HTTP-integrated virtual agent in AI Studio

The fashion industry has always been an ever-evolving world, and when it comes to buying, technology keeps making improvements. Online shopping is one of the big topics these days: there is no need to physically go into a store to spice up our wardrobe. But shopping without the right information can be frustrating.

Adding a virtual agent can improve the customer experience by providing a point of contact on the website, e.g. a webchat widget, where shoppers can direct their questions, making the experience more seamless and hassle-free.

That's why I chose the HTTP channel in AI Studio for my virtual agent tutorial. As an all-rounder channel, the HTTP agent can be integrated into any third-party application, and for today's example, I'm choosing Webchat. When training my agent and creating its knowledge base, I wanted to make it as easy and quick as possible. So I took advantage of one of AI Studio's latest features: the Generative AI node.

The Generative AI node has a Knowledge Base Description feature that saves us from creating and training intents: it only requires a paragraph of text for its knowledge base. In addition to the knowledge base, the node now allows us to add intents and actions directly. There is no need to add a Classification node to the flow if you want to cover more complex use cases beyond FAQs.

Prerequisites

To access the Generative AI node, you must register for a paid account directly with OpenAI: sign up with OpenAI and create an account. To get started with AI Studio, you must first sign up for a Vonage Developer Account.

Setting Up Your OpenAI Account for AI Studio

To follow this tutorial, you must update your AI Studio account according to the most recent instructions for integrating generative AI.

FAQs in AI Studio with Generative AI Node

I love shopping, and sometimes I look for help answering my questions when purchasing items or tracking my order. This is why I thought the FAQs of an online shopping service would be a great example for my use case. Therefore, I built a virtual agent using the Generative AI node for the fashion brand Bee Clothing that answers customers' FAQs.

By integrating the GenAI node with HTTP for FAQs, users can text their questions, and OpenAI's GPT will generate informative and tailored responses.
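To give an idea of what the HTTP integration looks like from the third-party side, here is a minimal Python sketch that starts a session and sends one question. The base URL, endpoint paths, and payload fields are assumptions based on AI Studio's HTTP-agent API and may differ for your region and account; check the official documentation before relying on them.

```python
import json
import urllib.request

# Hypothetical base URL -- verify the region and paths in the
# AI Studio HTTP-agent documentation.
BASE_URL = "https://studio-api-eu.ai.vonage.com/http"

def init_payload(agent_id: str) -> dict:
    # Assumed body for starting a new HTTP-agent session.
    return {"agent_id": agent_id, "session_parameters": []}

def step_payload(user_input: str) -> dict:
    # Assumed body for sending one user message within a session.
    return {"user_input": user_input}

def post_json(url: str, api_key: str, body: dict) -> dict:
    # Minimal JSON POST helper with the X-Vgai-Key auth header.
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"X-Vgai-Key": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def ask_agent(api_key: str, agent_id: str, question: str) -> dict:
    # Start a session, then send one question to the virtual agent.
    session = post_json(f"{BASE_URL}/init", api_key, init_payload(agent_id))
    return post_json(f"{BASE_URL}/{session['session_id']}/step",
                     api_key, step_payload(question))
```

A webchat widget would call something like `ask_agent` whenever the user submits a message, then render the agent's reply.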

Let’s kick off the conversation!

We’ll begin by adding nodes, starting with Collect Input followed by the GenAI node. In the Collect Input node, the agent introduces itself by saying “Hello girl! How can I help you today?”

How to Set the Generative AI Node

We then move to the GenAI node, where we insert the knowledge base we want the agent to utilize. But first, we select the integration, which in this case is “demo”.

Now it’s time to add the User Input Parameter, which we name “input”. Feel free to choose your own name; it doesn’t affect the functionality. If this parameter doesn’t already exist, we can create one by clicking “create parameter”.

To understand and capture the user’s input, we need parameters. Parameters help the agent collect a user’s information, making it easy to detect information in the user’s input. Parameters appear in many nodes, e.g. Collect Input, Classification, and Generative AI.

The input will convey the user’s intent and help the node understand the user’s context. We specify the parameter’s Name (input) and the Entity, setting it to @sys.any. Returning to our parameter settings, we select the parameter we just created, “input”.

Moving forward, we insert the Company Name, in this case, “Bee Clothing”.

Let’s add the text for our Knowledge Base, built around my online-shopping FAQs.

Here we focus on adding all the data needed to provide helpful information to the user. This is where we set the specific information we want the agent to possess, and it defines the boundaries within which the virtual agent can operate. Once the Knowledge Base is set, we select the Output Parameter, the one that will be used to provide the answer, called “output”.
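As an illustration, a knowledge base paragraph for Bee Clothing could look like the following. All details (shipping times, thresholds, return window) are invented for this example; replace them with your brand’s real policies.

```
Bee Clothing is an online fashion store for women's and men's apparel.
Standard shipping takes 3-5 business days; express shipping takes 1-2
business days. Orders over $50 ship for free. Items can be returned
within 30 days of delivery for a full refund, as long as the tags are
still attached. Customers receive a confirmation email with a tracking
link once their order has shipped.
```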

Finally, we can move to Actions. Actions handle complex requests in addition to our FAQs. Each action has an exit point that can be connected to different inputs, and up to ten actions can be added.

When making purchases online I want to make sure that my order is being processed and taken care of, so I can track it. Also, in case I see in the confirmation email that I’ve inserted the wrong shipping address I’d like to make changes on time.

Therefore, I thought two flows would be interesting for my use case: information about the order and about the shipping address. This is why I chose to insert the first action as “Track Order” and the second one as “Update Shipping Address”.

And lastly, we define the Configuration. Here we decide how long the user should wait for an answer and define the creative limits within which the node can respond. Under Configurations, we set the Creativity and Waiting Time according to our preferences. For Creativity, choose between 'None', which means the node will stick to your Knowledge Base when generating an answer, all the way up to 'High', which gives it complete freedom.

The GenAI node acts upon the input received through the Collect Input node. To generate answers, the Generative AI node requires the information collected by the first Collect Input node, which has been set with the "input" parameter.

Building the Flow

Moving on, we continue building the flow with a new Collect Input node, called “Tracking Order”, connected to a Condition node. The Collect Input node will ask “Would you like to have any help tracking your order?” and will be connected to the Condition node, which receives the input. In this case, we set the parameter of the Collect Input node to “confirmation”.

In the Condition node, we classify the user’s responses based on entities. This requires a parameter, which should be the same one used in the Collect Input node, “confirmation”, with @sys.confirmation as its entity.
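Conceptually, the branching performed by the Condition node can be sketched as follows. This is only an illustration of the routing logic; in AI Studio the branches are configured in the UI, and I am assuming @sys.confirmation has already normalized answers like “sure” or “nope” to yes/no.

```python
def route_confirmation(confirmation: str) -> str:
    # Sketch of the Condition node branching on the "confirmation"
    # parameter (assumed already normalized by @sys.confirmation).
    if confirmation == "yes":
        return "track_order"  # continue into the order-tracking flow
    if confirmation == "no":
        return "goodbye"      # no help needed, wrap up the conversation
    return "fallback"         # anything else goes to the fallback exit
```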

I chose to use this node so I can direct the conversation, and by doing that, I can differentiate the virtual assistant’s responses.

When the Generative AI node cannot provide an answer to the user's input, the Fallback exit point comes into action.

At this point, a Counter is set after the fallback to ensure that the conversation keeps flowing. The Counter is triggered each time the fallback is used and keeps track of how many times that has happened.

In my case, I have set the number of retries to two, connected to the "Send Message" node. This allows a message to be sent to the user without expecting any input.

Testing the Generative AI Node

Once the agent is built, we can test the GenAI node’s ability to handle multiple intents while keeping the context. I wanted to check its ability to handle multiple intents, so I decided to ask two questions at once, and this was the result.

Also, the inputs previously connected to the Actions section of the GenAI node have created a conversational flow that, step by step, has guided the agent and the user to the completion of the intent. Here’s an example:

The HTTP agent was created to meet users' needs by providing quick and tailored answers using the knowledge base in the Generative AI node. It proved to be an effective tool for handling different intents and maintaining the context of the user's questions. For users like me, having assistance when tracking an order is crucial, especially for shopping lovers.

Follow these steps and start building your HTTP agent!
