Generative AI
AI Studio with the power of GPT!
In order to use this node, you will need to set up a paid account directly with OpenAI. Click here to learn more.
This node is currently an add-on within AI Studio. To learn more about the pricing for this node, click here.
Want to build a comprehensive, smart conversational assistant that knows nearly everything on the publicly available web? Enter the Generative AI (GenAI) node!
Use the power of OpenAI's Large Language Model to enhance your virtual assistant so it can handle user queries with context-specific nuance and the advantage of having the internet as its data source.
Exercise care when employing this feature.
The Generative AI (GenAI) node is an experimental beta feature leveraging a third-party Large Language Model (LLM) and should be used in production with caution, due to its potential to generate misleading responses.
All information provided in the node will be shared with the AI models (currently OpenAI only).
Here’s how to set it up:-
Gather your user utterance
Use the Collect Input node to gather your user's input as usual. This input will be fed through the Generative AI (GenAI) node to be analysed and acted upon.
Due to the unpredictable nature of the Large Language Model, we recommend using the Generative AI (GenAI) node as a fallback for your regular flow.
You can set this up by using a Classification or Conditions node as normal to branch your flow after collecting your user input.
Next, connect the Generative AI (GenAI) node to the Default path of these nodes. This lets you control the majority of your flow using Intents and Entities as usual, with the added benefit of handling all kinds of unexpected user input using the vast knowledge of OpenAI's dataset.
This also ensures that sensitive customer information is not sent out to third parties while taking advantage of a truly smart assistant whose knowledge is virtually boundless.
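The fallback pattern described above can be sketched in code. This is a hypothetical illustration only (the intent names and keywords are invented for the example): known queries are matched by your regular Classification flow, and anything unmatched falls through the Default path to the GenAI node.

```python
# Hypothetical sketch of the fallback pattern: intents handle known
# queries, and anything unmatched falls through to the GenAI node.
KNOWN_INTENTS = {
    "book_room": ["book", "reservation", "reserve"],
    "opening_hours": ["hours", "open", "close"],
}

def route_user_input(utterance: str) -> str:
    """Return the branch a Classification node would take."""
    text = utterance.lower()
    for intent, keywords in KNOWN_INTENTS.items():
        if any(word in text for word in keywords):
            return intent          # handled by the regular flow
    return "genai_fallback"        # Default path -> Generative AI node
```

In this sketch, only the unmatched utterance ever reaches the GenAI branch, which is how the pattern keeps most of the flow under your direct control.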
Add the Generative AI Node to your flow.
From the toolbar on the left of your screen, under Nodes in the Integrations category, select the Generative AI (GenAI) Node, then drag and drop it onto the canvas at the appropriate point in your flow.
Choose the Parameter that needs to be analysed.
Once you have added the node, within the User Input Parameter textbox, choose the parameter where your desired user input is stored from a previous Collect Input node.
Any OpenAI Integration that you set up earlier will be displayed in the 'OpenAI Integration' dropdown. If you have multiple integrations set up for your account, select the most appropriate one.
Enter your Company Name
This step is not mandatory; however, we recommend adding your company name so that the LLM can use it during the conversation if necessary. If your organization is sometimes confused with another, similarly named organization, this gives the AI model a better chance of differentiating your company from the rest.
Define the rules of what the Virtual Assistant should be able to reply to your users with
What you enter in the Knowledge Base defines the boundaries within which your Virtual Assistant (VA) operates. You will need to provide a description of what you want the VA to be able to answer.
The following is an example of a Knowledge Base written for a resort provider:-
"Vonage Resorts offers water park and amusement park fun for the whole family. Guest stays include access to our heated water park kept at a warm 30 degrees year-round. Other attractions offered include mini-golf, ziplining, paragliding, and camping activities. Guests can take advantage of resort deals and special offers, such as our dining packages, activity passes, spa packages, and more. Smoking is not allowed in any area of the resort or water park. This includes the use of electronic cigarettes and smokeless tobacco. A penalty will be applicable in case restrictions are not adhered to. The water park juniors area and certain rides within the amusement park are wheelchair accessible."
This example is, of course, incredibly detailed, but it ensures that the virtual assistant has access to well-defined information it can use to reply to your user.
This also gives you the ability to formulate further flows based on the Generative AI (GenAI) Node's response.
For example, in this particular use case, you can also connect a Conditions Node to check to see if the response includes information about the waterpark and further build out a flow to provide an SMS with a direct link to book a waterpark pass or even connect them with a Live Representative using the Live Agent Routing Node.
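A Conditions-node check of that kind can be sketched as follows. This is a hypothetical illustration (the branch names are invented for the example): it inspects the GenAI node's output parameter and branches when the response mentions the water park.

```python
# Hypothetical Conditions-node check on the GenAI output parameter:
# branch to a booking flow when the response mentions the water park.
def mentions_waterpark(genai_response: str) -> bool:
    text = genai_response.lower()
    return "water park" in text or "waterpark" in text

def next_step(genai_response: str) -> str:
    if mentions_waterpark(genai_response):
        return "send_sms_with_booking_link"   # e.g. an SMS node
    return "continue_conversation"
```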
If you choose to include parameters in the description, the characters within the parameter name will also be counted as part of the total 6000-character limit.
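A minimal sketch of that limit check, assuming the documented 6000-character ceiling. Note that, as stated above, parameter names embedded in the description count toward the total.

```python
MAX_KB_CHARS = 6000  # documented Knowledge Base character limit

def knowledge_base_length(description: str) -> int:
    """Count every character, including any embedded parameter names,
    toward the 6000-character limit."""
    return len(description)

def fits_knowledge_base(description: str) -> bool:
    return knowledge_base_length(description) <= MAX_KB_CHARS
```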
Select the Output Parameter
Once the virtual assistant has pulled the response from the LLM model, the output needs to be stored in a parameter.
Make sure that your output parameter has the sys.any entity type attached to it.
Similar to mapping a response, the output stored in the parameter can then be used and manipulated throughout the conversation.
Add in Actions
Similar to intents, the GenAI node now allows you to add Actions for the LLM to recognize.
This gives you the ability to formulate further flows based on the Generative AI (GenAI) Node's response, particularly for the topics that you do not want the LLM to handle.
Each Action is limited to 40 characters.
Currently, you can set up to 10 actions for the node to recognize. Each action you create adds a new exit point to the node so that you can build specific flows for each of your Actions.
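The two documented Action limits can be sketched as a simple validation step. This is a hypothetical helper, not part of the product; it only mirrors the 10-action and 40-character rules stated above.

```python
MAX_ACTIONS = 10        # documented limit of actions per node
MAX_ACTION_CHARS = 40   # documented per-action character limit

def validate_actions(actions: list) -> list:
    """Return the reasons (if any) an Actions configuration is invalid."""
    problems = []
    if len(actions) > MAX_ACTIONS:
        problems.append("too many actions: %d > %d" % (len(actions), MAX_ACTIONS))
    for name in actions:
        if len(name) > MAX_ACTION_CHARS:
            problems.append("action name too long: %r" % name)
    return problems
```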
Configurations: Set the Creativity Level and Waiting Time
The Creativity Level slider controls how closely the VA's responses adhere to the description. The closer the slider is to 'None', the more strictly the model sticks to the description; the higher the slider is moved, the more freedom the model takes in responding to the user. The default is 'None', and you can set this value anywhere between 'None' and 'High'.
Waiting Time controls how long the virtual assistant will wait for the API request to respond. Please note that the longer the waiting time, the greater the chance of your conversation flow being disrupted. The default value is 15 seconds, and you can set it to anything between 10 and 25 seconds.
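Conceptually, the Creativity Level maps to an LLM sampling "temperature", and the Waiting Time is a bounded request timeout. The sketch below is an assumption-laden illustration: the intermediate slider labels ("Low", "Medium") and the exact temperature values are invented for the example; only the 'None'/'High' endpoints and the 10-25 second range come from the documentation above.

```python
# Hypothetical mapping from the Creativity Level slider to an LLM
# "temperature" value; intermediate labels and values are assumptions.
CREATIVITY_TO_TEMPERATURE = {
    "None": 0.0,    # stick strictly to the Knowledge Base description
    "Low": 0.3,
    "Medium": 0.7,
    "High": 1.0,    # maximum freedom in responding to the user
}

DEFAULT_WAIT_SECONDS = 15
MIN_WAIT_SECONDS, MAX_WAIT_SECONDS = 10, 25  # documented bounds

def clamp_waiting_time(seconds: int) -> int:
    """Keep the configured waiting time inside the allowed 10-25 s range."""
    return max(MIN_WAIT_SECONDS, min(MAX_WAIT_SECONDS, seconds))
```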
Set up the rest of your flow
Now that you have set up the Generative AI (GenAI) Node you can use the output parameter to determine the course of the conversation.
You can simply put the parameter in a Speak node, or use it in a Conditions node to change the direction of the conversation based on conditions you define.
You also have the ability to designate the flow beyond the node using the node's four exit points.
The ‘Successful’ exit point (coloured in white) refers to a valid response returned from the model.
The ‘Fallback’ exit point, refers to a scenario where the model was not able to provide a response to the user's input.
The ‘Failed’ exit point refers to an API error or timeout.
The 'Conversation Ended' exit point is triggered when the node recognizes that either the end-user or GPT has ended the conversation.
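The four exit points above amount to a simple dispatch over the outcome of the request. The sketch below is a hypothetical illustration of that logic, not the node's actual implementation:

```python
from typing import Optional

# Hypothetical dispatch over the node's four documented exit points.
def pick_exit_point(response: Optional[str], api_error: bool,
                    conversation_ended: bool) -> str:
    if api_error:
        return "Failed"               # API error or timeout
    if conversation_ended:
        return "Conversation Ended"   # end-user or GPT ended the chat
    if response:
        return "Successful"           # valid response from the model
    return "Fallback"                 # model could not provide a response
```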
Before you go ahead and publish your agent, however, please keep in mind the following:-
Latency: There may be a delay in the agent's response because the data is processed by a third party. Account for this by letting the user know (you can include multiple prompts along the lines of “Okay, let me process that for you” for the virtual assistant to choose from at random during the conversation) or even by playing hold music!
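Randomly rotating filler prompts, as suggested above, can be sketched like this (the prompt texts beyond the documented example are invented for the illustration):

```python
import random

# One documented example prompt plus invented variations for illustration.
HOLD_PROMPTS = [
    "Okay, let me process that for you.",
    "One moment while I look into that.",
    "Give me just a second.",
]

def pick_hold_prompt() -> str:
    """Pick a filler prompt at random so repeated waits don't
    sound identical to the user."""
    return random.choice(HOLD_PROMPTS)
```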
Response control: The responses provided by the agent are not hard coded or static. The major difference between this node and using Intents with Send Message nodes is that, as a designer, you cannot define the agent's exact response. You can only provide guidelines for it to work within, so please note that responses may be unregulated.