Q&A Node
Building an Intelligent Virtual Assistant with Knowledge AI
Looking to create a smart virtual assistant using your own data? Knowledge AI in AI Studio empowers your Virtual Assistant (VA) to generate accurate answers based on your content. Instead of depending on predefined intents and entities, Knowledge AI directly utilizes your uploaded or linked materials, called Sources, to respond contextually to user inquiries.
When used with the Q&A Node, Knowledge AI leverages a system based on RAG (Retrieval-Augmented Generation). It combines semantic search with Google Gemini’s large language models (LLMs) to produce relevant answers efficiently.
This node requires the Knowledge AI tab to be set up before use. Learn more here. 🔍
What is the Q&A Node?
The Q&A node allows the Virtual Assistant to access the Indexes you’ve created in the Knowledge AI tab, enabling smooth and informative conversation flows with minimal build time.

It is designed to dynamically generate responses by pulling from your uploaded Sources via an Index, using Knowledge AI’s RAG (Retrieval-Augmented Generation) pipeline, which includes Vonage’s proprietary Semantic Search and Google Gemini's LLM response generation.
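The retrieval-then-generation flow can be pictured with a short, hedged sketch. The chunking, toy embedding, and prompt below are illustrative assumptions only, not Vonage's proprietary Semantic Search or the Gemini integration.

```python
# A minimal, illustrative RAG sketch (not the actual Knowledge AI
# implementation): embed the Source chunks, retrieve the passages most
# similar to the question, and hand them to an LLM prompt for grounding.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; production semantic search uses learned vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    q_vec = embed(question)
    return sorted(chunks, key=lambda c: cosine(q_vec, embed(c)), reverse=True)[:top_k]

def build_prompt(question: str, passages: list[str]) -> str:
    context = "\n".join(passages)
    # The model is instructed to answer only from the retrieved Source text.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

chunks = [
    "Passengers can bring up to two animals (dogs or cats) in approved containers.",
    "Containers must not exceed 118 cm and a total weight of 8 kg.",
    "Checked baggage allowance is 23 kg per passenger.",
]
question = "What animals are allowed on a plane?"
print(build_prompt(question, retrieve(question, chunks)))
# In Knowledge AI, the generation step is handled by Google Gemini.
```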
Pro Tips 🔥
Managing Indexes Across Multiple Q&A Nodes and VAs
You can use multiple Q&A nodes in the same VA to handle different knowledge domains. → Assign a different Index to each Q&A node.
Indexes are restricted to one VA per API key. → If you can’t find an Index in the dropdown, check if it's already in use by another VA.
Want to use the same content across multiple VAs under the same API key? → Duplicate the Index and assign it to the other VA.
💡 Duplicating Indexes lets you reuse knowledge content without cross-VA conflicts; ideal for scaling VAs efficiently.
Setting Up the Q&A Node
To configure the Q&A node:
Select an Index
Choose one Index that will serve as the Knowledge Base for that node. Only Indexes created under the API key for the selected VA — and not used elsewhere — will appear in the dropdown.
🔍 Learn more about grouping your Sources into Indexes here.
Each Q&A node is restricted to one Index, but you can use multiple Q&A nodes within a single VA to access different Sources across different parts of the flow.

Configurations
Handle Follow-Up Questions (Optional)
This setting is available in both the Q&A Node and the Index Tester. It controls how Knowledge AI interprets user queries within a conversation.
The Handle follow-up questions toggle allows Knowledge AI to understand and respond to follow-up queries based on the ongoing conversation, without requiring the user to restate previous context.

➡️ Example: Extended Context
🟢 Handle Follow-Up Questions ON
End User: What animals are allowed on a plane?
VA: Passengers can bring up to two animals (dogs or cats) in approved containers, either in the cabin or cargo hold.
End User: Give me more details!
VA: Containers must not exceed 118 cm (55 × 40 × 23 cm) or 47 in (22 × 16 × 9 in), with a total weight of 8 kg. They must be leak-proof and lined with absorbent material.
⚪ Handle Follow-Up Questions OFF
End User: What animals are allowed on a plane?
VA: Passengers can bring up to two animals in approved containers.
End User: Give me more details!
VA: Sorry, I don’t know.
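Follow-up handling of this kind is commonly implemented by rewriting the follow-up into a standalone question using the conversation so far. The sketch below is only an analogy for what the toggle enables; the function and its wording are assumptions, not Knowledge AI internals.

```python
# Hedged sketch of follow-up handling via query rewriting; the function
# and wording are illustrative assumptions, not Knowledge AI internals.
def rewrite_follow_up(history: list[tuple[str, str]], follow_up: str) -> str:
    if not history:
        return follow_up  # nothing to resolve against
    last_question, last_answer = history[-1]
    # In practice an LLM performs this rewrite; concatenation just shows
    # which pieces of context the rewrite draws on.
    return (f"Earlier the user asked: '{last_question}' and was told: "
            f"'{last_answer}'. They now ask: '{follow_up}'")

history = [("What animals are allowed on a plane?",
            "Up to two animals (dogs or cats) in approved containers.")]
print(rewrite_follow_up(history, "Give me more details!"))
```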
Pro Tip 🔥
Keep this toggle ON 🟢 for smoother, more human-like interactions.
Turn it OFF ⚪ if queries are unrelated to one another, or if lower latency is required.
⚠️ Always test your VA flows to understand how context affects accuracy and response time.
Answer Length (Optional)
Use Answer length to control how detailed Knowledge AI’s responses should be.
Choose whether the response should be:
Shorter: Good for simple answers
Longer: Best for complex, context-heavy queries
Minimum: 100 characters / 20 words; there is no upper limit.

Longer answers increase accuracy but also increase latency. Always test what works best for your use case.
If not defined, Knowledge AI automatically determines the best answer length based on the query and the Knowledge Base.
If defined, Knowledge AI treats it as a soft constraint. The response will generally fall within a small range of the specified value.
Exact control is not guaranteed due to the non-deterministic nature of the LLMs used by Knowledge AI.
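One way to picture a soft constraint is as a length hint folded into the generation prompt rather than a hard cap. The snippet below is a hypothetical illustration; the function and phrasing are assumptions, not the node's actual settings.

```python
# Hypothetical illustration of an answer-length hint as a soft constraint.
MIN_WORDS = 20  # documented minimum: 100 characters / 20 words

def length_instruction(target_words: int | None) -> str:
    if target_words is None:
        # Undefined: let the model pick a length suited to the query and sources.
        return "Choose an answer length appropriate to the question."
    target = max(target_words, MIN_WORDS)
    # Soft constraint: the model is asked, not forced, to stay near the target.
    return f"Aim for roughly {target} words; small deviations are acceptable."

print(length_instruction(None))
print(length_instruction(40))
```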
Response Guidelines (Optional)
You can set the tone, format, and boundaries of responses with custom instructions.
Tone: Formal, friendly, concise, etc.
Topic Restrictions: Prevent the assistant from veering off-course.
Custom Guardrails: Rules for responses based on testing feedback.
Company-Specific Terminology: Ensure branding consistency (e.g., use "Vonage" instead of "we").
These guidelines help you match your assistant’s voice to your brand and use case.
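Conceptually, these guidelines become extra instructions that accompany answer generation. The keys and phrasing below are illustrative assumptions, not the node's configuration schema.

```python
# Illustrative only: folding custom response guidelines into one
# instruction block (keys and phrasing are assumptions).
guidelines = {
    "Tone": "friendly and concise",
    "Topic restrictions": "only answer questions about travel policies",
    "Terminology": 'always say "Vonage" instead of "we"',
}
instruction = "\n".join(f"- {name}: {rule}" for name, rule in guidelines.items())
print(instruction)
```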

Knowledge AI operates in single-turn mode:
The Q&A Node processes one input and returns one output.
After returning the response, the node is marked as completed.
It does not support back-and-forth conversation. If you want to reuse the same Q&A Node, add a loop in your Virtual Assistant flow.
🔍 Learn more about how to set up your Q&A node in your VA flow here.
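Flow wiring in AI Studio is visual, but the loop pattern reads roughly like the pseudocode below: each pass through the loop is one independent, single-turn Q&A exchange. This is only an analogy, not actual flow configuration.

```python
# Analogy for looping back to a single-turn Q&A step; each iteration is
# one independent question/answer exchange (not actual AI Studio wiring).
def qa_node(question: str) -> str:
    return f"Answer to: {question}"  # stand-in for the Q&A node

while True:
    user_input = input("Ask a question (or type 'done'): ")
    if user_input.strip().lower() == "done":
        break
    print(qa_node(user_input))  # the node completes after one output, then we loop
```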
Waiting Time
This determines how long your VA will wait for a response.
Default: 3 seconds
Customizable range: 2 - 10 seconds

Pro Tip 🔥
Shorter wait times feel more natural in human conversation.
Longer wait times reduce API timeout risk but may affect flow pacing, especially for Voice channels.
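The trade-off can be made concrete with a timeout-plus-fallback sketch. The mechanism below is an analogy using Python's standard library, not how AI Studio enforces the wait internally; only the 3-second default and the 2 to 10 second range come from the settings above.

```python
# Analogy for a wait limit with a fallback reply; only the default and
# range values mirror the settings above, the mechanism is assumed.
import concurrent.futures
import time

WAIT_SECONDS = 3  # default; configurable between 2 and 10 seconds

def knowledge_lookup(question: str) -> str:
    time.sleep(5)  # simulate a lookup that exceeds the wait time
    return "Detailed answer..."

with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(knowledge_lookup, "What animals are allowed?")
    try:
        print(future.result(timeout=WAIT_SECONDS))
    except concurrent.futures.TimeoutError:
        print("Sorry, that is taking longer than expected.")  # fallback path
```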
Managing Outputs
The Output mode setting defines how Knowledge AI processes and returns information from your Knowledge Base.

There are two modes available:
➡️ Search & Respond
This is the default output mode used by Knowledge AI.
In this mode, Knowledge AI retrieves information and generates a refined response that can be sent directly to the end user.
➡️ Search
The Search mode performs a knowledge search and retrieves relevant information without generating a summarized response.
This mode is useful when you want to access the raw search results for further processing by other systems or AI components.
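The difference between the two modes is easiest to see in the shape of what comes back. The dictionaries below are illustrative assumptions about that shape, not the node's actual output schema.

```python
# Illustrative output shapes for the two modes (assumed, not the real schema).
search_and_respond_output = {
    # A single refined answer, ready to send to the end user.
    "answer": "Passengers can bring up to two animals in approved containers.",
}
search_output = {
    # Raw retrieved passages, left for downstream systems or AI components.
    "results": [
        {"source": "pet-policy.pdf", "text": "Up to two animals (dogs or cats)..."},
        {"source": "pet-policy.pdf", "text": "Containers must not exceed 118 cm..."},
    ],
}
print(search_and_respond_output["answer"])
print(len(search_output["results"]), "raw results")
```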
⚙️ Diagnosing Knowledge AI Outputs
If Knowledge AI gives incomplete or incorrect answers, review its behavior manually using AI Studio Reports or the upcoming Knowledge AI Insights.
Common Issues and Fixes
Ambiguous user question
Cause: The query is unclear or incomplete.
Fix: Ask the user to clarify or rephrase in the VA flow.

Outside Knowledge Base scope
Cause: The Index lacks relevant data.
Fix: Add new, relevant material to the Knowledge Base.

Search issue
Cause: The information exists but isn’t being retrieved.
Fix: Review and optimize Source formatting.

Partial or inaccurate answer
Cause: The model retrieved the content but misunderstood it.
Fix: Improve Source structure or revise the Response Guidelines.
Using the Q&A Node in your VA
The Q&A node is most effective when used as part of a broader conversational setup. Examples include:
Collect input ➜ Run Q&A node ➜ Return result
Use as a fallback if other nodes fail
Route back to the Q&A node after collecting more context
The Q&A node doesn’t remember previous answers. Add context before sending the query if needed.
You can also use the Context Switch node to allow the VA to pivot between topics.
Pro Tip 🔥
If one Index doesn’t return an answer, route the “Don’t Know” path to another Q&A node with a different Index.
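That "Don't Know" fallback pattern reads roughly like the sketch below. The index contents and lookup function are hypothetical; in AI Studio the routing is wired visually between Q&A nodes.

```python
# Hypothetical sketch of the "Don't Know" fallback across two Indexes;
# in AI Studio this routing is done visually between Q&A nodes.
def ask_index(index_name: str, question: str) -> str | None:
    knowledge = {
        "pet-policy": {"animals": "Up to two animals in approved containers."},
        "baggage-policy": {"baggage": "23 kg of checked baggage per passenger."},
    }
    for keyword, answer in knowledge.get(index_name, {}).items():
        if keyword in question.lower():
            return answer
    return None  # the "Don't Know" path

question = "What is the baggage allowance?"
answer = ask_index("pet-policy", question) or ask_index("baggage-policy", question)
print(answer or "Sorry, I don't know.")
```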

What's next
Now that you understand how to configure and optimize the Q&A node, you're ready to start building conversational experiences that don't just respond, but respond intelligently.
👉 Next Steps: Ensure your Knowledge AI setup is complete and thoroughly tested. Then build and scale Q&A nodes across your assistant flows to maximize their value and performance.