Q&A Node
⚠️ This feature is currently in public beta. We welcome feedback and may make changes based on usage and input. While it can be used in production, please do so with awareness of potential updates.
Knowledge AI is a gated feature. For more information or to enable this feature, please contact your Account Manager.
The Q&A node allows you to access the Indexes you created under the Knowledge AI tab, enabling you to create smooth and informative conversation flows with minimal build time!
Setting up the node requires you to first select one Index that the node will use as its Knowledge Base. All the Indexes that you have previously created under the API key of the selected virtual assistant (VA), that are not being used elsewhere, will be visible to you in the Index dropdown.
Next, you will need to select the parameter that contains your user query under the User Query dropdown. This input will be processed using Knowledge AI (which combines RAG APIs with Knowledge Base Semantic Search and LLM response generation) in the backend.
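As a conceptual illustration only, the retrieve-then-generate pipeline described above could be sketched like this. The word-overlap scoring and the `generate` callback are stand-ins: the real Knowledge AI service uses embedding-based semantic search and an LLM, and none of these function names are part of AI Studio.

```python
def semantic_search(index, query, top_k=3):
    """Toy relevance scoring via word overlap (illustration only;
    the real service uses embedding-based semantic search)."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(p.lower().split())), p) for p in index]
    scored = [(score, p) for score, p in scored if score > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored[:top_k]]

def answer_query(query, index, generate):
    """RAG sketch: retrieve supporting passages, then draft a grounded
    answer. `generate` stands in for the LLM response-generation call."""
    passages = semantic_search(index, query)
    if not passages:
        # No relevant source found: maps to the node's "Don't know" output.
        return {"output": "dont_know"}
    prompt = "Answer using only these sources:\n" + "\n".join(passages) + "\nQ: " + query
    return {"output": "success", "answer": generate(prompt)}
```

The key property this sketch captures is that the answer is always grounded in retrieved sources, which is why an empty retrieval maps to a "Don't know" rather than a guess.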
Please note that processing the query may add 2–5 seconds of latency.
The waiting time controls how long the virtual assistant will wait for the request to respond.
Conversation Design Tip!
Please note that the longer the waiting time, the greater the chance of your conversation flow being disturbed.
Short response times help to mimic the natural tempo of human conversation, whereas longer waiting times prevent failures due to API timeout.
The default value is 3 seconds, and you can set it to anything between 2 and 10 seconds, depending on your VA's channel (Voice, WhatsApp, SMS, HTTP) and your use case. The delay is especially noticeable on the Voice channel compared with the text-based channels, so customize it to suit your requirements.
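For illustration, a channel-aware choice of waiting time could look like the sketch below. Only the 3-second default and the 2–10 second bounds come from this page; the specific per-channel values are assumptions you should tune through testing.

```python
def recommended_waiting_time(channel):
    """Pick a waiting time within the node's 2-10 second range.

    Voice is latency-sensitive, so stay near the 3 s default to keep
    a natural conversational tempo; text channels can trade speed for
    fewer timeouts. These exact values are illustrative assumptions."""
    if channel.lower() == "voice":
        return 3.0  # keep close to the default on the latency-sensitive channel
    return 8.0      # WhatsApp / SMS / HTTP: favour fewer timeouts over speed
```

Whatever values you pick, keep them inside the 2–10 second range the node accepts.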
Depending on your use case, your Virtual Assistant can provide shorter or longer answers. This setting is optional.
For simple, direct questions, shorter responses help users get information quickly. For more complex queries, longer responses may be necessary to provide adequate context and clarity. Choose the response length that best fits your users' needs and the nature of the interaction. The minimum response length is 20 words, but there is no maximum; responses can be as detailed as needed.
Answer length affects both response accuracy and latency, so test it before deploying to a live agent. Typically, a longer answer length generates more accurate answers at higher latency, whereas a shorter answer length generates less accurate answers at lower latency.
Customize how your Virtual Assistant responds. Guide the tone, style, structure, and level of detail to match your brand voice or use case.
Some of the options include:
Tone: Specify whether the assistant should respond in a formal, friendly, casual, or concise tone.
Topic Restrictions: Set guidelines for topics that should be avoided during the conversation to ensure the assistant stays on track.
Custom Guardrails: Implement additional rules based on issues or gaps identified during testing to fine-tune the assistant's responses.
Company-Specific Guidelines: Ensure the assistant uses your company’s exact terminology, such as using "XYZ Corp" instead of "XYZ" or "we", to maintain brand consistency.
These options help ensure that the assistant's responses are consistent, high-quality, and aligned with your desired communication style.
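As a hypothetical example of how these options combine, you might draft the guidance text you enter into the node like this. The function and parameter names are illustrative, not part of AI Studio; only the categories (tone, topic restrictions, guardrails, company terminology) come from the list above.

```python
def build_persona_instructions(tone=None, avoid_topics=(), guardrails=(), company_name=None):
    """Assemble free-text response guidance of the kind you might type
    into the node's customization field. All field names are illustrative."""
    lines = []
    if tone:
        lines.append(f"Respond in a {tone} tone.")
    if avoid_topics:
        lines.append("Avoid discussing: " + ", ".join(avoid_topics) + ".")
    # Custom guardrails: extra rules added after testing reveals gaps.
    lines.extend(guardrails)
    if company_name:
        lines.append(f'Always refer to the company as "{company_name}", never an abbreviation or "we".')
    return "\n".join(lines)
```

Keeping the guidance short and explicit, one rule per line, makes it easier to trace which instruction produced which behaviour during testing.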
There are three possible outputs from this node:
Success: Indicates that a response to the user query was successfully generated from the selected Index. The response is also saved to the response parameter selected for this output.
Don't know: Indicates that the model could not generate an appropriate response from any of the sources within the Index.
Failed: Signifies an error or timeout in the Knowledge AI solution.
It is important to account for all of these outputs in order to maintain optimal user experience. Learn more on how to build meaningful flows here.
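A minimal sketch of handling all three outputs in a downstream integration might look like this. The dictionary shape and the fallback messages are assumptions for illustration; in AI Studio itself you would wire each exit to its own branch of the flow.

```python
def route_qna_output(result):
    """Map each of the node's three exits to a follow-up action so that
    no path is left unhandled. The dict shape is an illustrative assumption."""
    output = result.get("output")
    if output == "success":
        return result["answer"]  # relay the saved response parameter
    if output == "dont_know":
        return "I couldn't find that in my knowledge base. Could you rephrase your question?"
    # "failed": error or timeout in the Knowledge AI solution
    return "Sorry, something went wrong on my end. Let me connect you with an agent."
```

The point of the sketch is simply that "Don't know" and "Failed" deserve different recoveries: one invites a rephrase, the other escalates.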
Please note that any changes made to the Q&A node within a published flow will immediately be reflected in the VA's live behaviour, without the need to re-publish!
This includes any changes made to the Index, the Source, and the Q&A node itself. Make sure your agent is fully tested and ready for use before it goes live, to prevent any unexpected behaviour.
If you have to make changes to a live agent, we recommend duplicating the flow for testing, then applying the changes to the live version once you have confirmed the behaviour is as expected.
Your setup can be as simple as collecting input and then using the Q&A node to answer the question immediately.
You can also use the node as a fallback for an existing setup.
Accounting for user input beyond the Q&A node is also quite easy: simply relay the response, add a collection point to gather the end user's next input, and reroute back to the node.
Please keep in mind that the node does not retain memory of previously provided responses, so it is a good practice to either add context to the user query or classify it before it reaches the Q&A node.
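Since the node is stateless, one way to preserve continuity is to fold recent turns into the query before it reaches the node, as in this illustrative sketch. The formatting convention shown is an assumption, not a product requirement.

```python
def contextualize_query(history, new_input, max_turns=2):
    """Prepend recent conversation turns to the user query, because the
    Q&A node has no memory of earlier answers. The "Context:/Question:"
    layout is an illustrative convention, not an AI Studio format."""
    recent = history[-max_turns:]
    if not recent:
        return new_input
    return "Context: " + " ".join(recent) + "\nQuestion: " + new_input
```

Keeping `max_turns` small limits prompt growth while still resolving pronouns like "its" or "that" in follow-up questions.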
The response generated from this process then needs to be saved within a parameter, which you can select under the Response dropdown. You can then use this parameter within another node to relay it to the end user.
Like every other node within AI Studio, the Q&A node needs to be supported by other nodes in order to create a seamless conversational flow for your end users.
You can also take advantage of context digression by using the Q&A node in conjunction with other nodes.