Collect Input
Fill in a specific parameter value based on user input. In every Collect Input node, the virtual agent asks the user a question. If the input matches the entity type bound to the parameter, the parameter value is captured and stored.
To fill in a parameter value, select the name of the parameter you would like to fill, and add the prompt.
If you want your agent to try capturing the input again after failing on the first attempt, you can add any number of retries. For each retry, the agent will repeat the same prompt or play a different one until the value is filled.
You can either use the synthetic voice you selected upon agent creation to read out the prompt, or upload a human voice recording.
The supported recording file types are: wav, mp3, ogg. Maximum file size is 4MB.
You will notice the two other tabs next to the parameter:
"No Input" - Triggered when the agent was unable to collect any input from the caller, e.g. when the caller stayed silent on the line. The agent will repeat the prompt (as many times as the number of retries you added) before triggering the "No Input" flow. You can define what behaviour should follow if no caller input could be collected, e.g. route the call to a live representative. See below for how to define how long the agent will wait for input from the user.
"Missed" - Triggered when the caller's input doesn't match the selected entity. The agent will repeat the prompt (as many times as the number of retries you added) before triggering the "Missed" flow. You can specify what behaviour the agent should follow in such a case, e.g. route the call or ask a follow-up question to redirect the flow.
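The retry-and-fallback behaviour described above can be sketched as follows (a simplified, hypothetical model for illustration only; AI Studio handles this internally and the helper functions are assumptions, not part of any real API):

```python
def collect_input(get_input, matches_entity, max_retries=2):
    """Illustrative sketch of a Collect Input node's retry logic.

    get_input():         returns the caller's reply, or None on silence.
    matches_entity(v):   True if the reply fills the bound entity.
    """
    answer = None
    for _ in range(1 + max_retries):          # first prompt + each retry
        answer = get_input()                  # the prompt is (re)played each time
        if answer is not None and matches_entity(answer):
            return ("filled", answer)         # value captured and stored
    # Retries exhausted: hand off to the matching fallback flow.
    if answer is None:
        return ("no_input", None)             # e.g. route to a live representative
    return ("missed", answer)                 # e.g. ask a follow-up question
```

For example, a caller who stays silent once and then answers correctly still fills the parameter; a caller who never answers ends up in the "No Input" flow.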
If the parameter value has already been collected on a previous node (or even on the same node, if the caller returned to it during the same conversation), you can choose to skip this Collect Input node and keep the originally collected value. If you would like to override the value, leave this box unchecked.
The caller can respond either via speech or using the keypad (DTMF). You can toggle on both speech and DTMF if you'd like to give the caller the option to respond via either input. At least one of these needs to be switched on.
"Detect Silence" - You can also control how long the system will wait after the user stops speaking before deciding that the input is complete. The default value is 1 second. The range of possible values is 0.4 to 5 seconds.
"No Input" - You can control how long the system will wait for the user's input by adding a number of seconds to the "No Input" field in the node. The range of possible values is 1 to 60 seconds. Once this time frame passes, the agent will trigger the retry logic until it reaches the last retry and moves to the "No Input" flow. You can add as many retries as you see fit.
"Context Keywords" - Add context keywords to improve recognition quality when certain words are expected from the user. The agent will look out for these words in the caller's input, which can e.g. help classify it into the proper intent.
"Should Record" - Choose the "Should Record" option to record and generate a short audio file of the value collected in a parameter. Once the recorded parameter has been filled by a caller, the system will generate a unique URL containing the voice-recorded value for later use.
The caller has the option to respond using the keypad. The following settings are related to the keypad:
"Time Out" - Set how many seconds to wait after the caller's last keypress before the result is submitted. The default value is 10 seconds, and the maximum is 60. The "Time Out" value will be the same as the "No Input" value if both Speech and DTMF are toggled on.
"Max Digits" - The maximum number of digits the caller can enter. The default is 20 digits, which is also the maximum.
"Submit on Hash" - Choose 'yes' if you'd like the caller's response to be submitted when they press the # key.
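Taken together, these settings determine when a DTMF entry is treated as complete. A minimal sketch, under the assumption that input ends on '#' (when enabled), on reaching "Max Digits", or when "Time Out" expires with nothing further pressed:

```python
def dtmf_result(digits, max_digits=20, submit_on_hash=True):
    """Illustrative: decide whether a DTMF entry is complete.

    digits: the keys pressed so far, e.g. "1234#".
    Returns the submitted value, or None if the agent should keep
    waiting (until the "Time Out" interval elapses).
    """
    if submit_on_hash and "#" in digits:
        return digits.split("#", 1)[0]    # '#' submits whatever was typed
    if len(digits) >= max_digits:
        return digits[:max_digits]        # hitting "Max Digits" submits
    return None                           # keep collecting keypresses
```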
AI Studio now allows you to enable your users to interrupt your virtual assistant to provide their input!
This is helpful for returning customers who may already know the extension codes or the options within your agent and are in a hurry.
To accommodate returning users and help create a customized experience for them, create Users Parameters in order to skip collecting information they may have already provided.
To enable barge-in, open each Collect Input node where you want it enabled, scroll to the bottom of the node, and toggle the switch.
Enabling barge-in switches on the ability to interrupt the virtual assistant with both speech and DTMF input.
Make sure to adjust the noise sensitivity so that the virtual assistant is not "barged in" on by background noise.
To fill in a user ID that contains numbers only, create the parameter "USER_ID" and assign the @sys.number entity to that parameter. A good prompt might be "What is your ID number, please?". You can also dictate the flow of the conversation based on a filled value by using the "Classification" or "Conditions" nodes.
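The kind of branching a "Conditions" node performs on a filled value can be sketched like this (hypothetical example; in AI Studio you configure this in the node's UI, not in code, and the ID range is an assumption for illustration):

```python
def route_on_user_id(user_id):
    """Illustrative: branch the conversation on the USER_ID parameter."""
    if user_id is None:
        return "collect_input"    # parameter not yet filled: ask for it
    if 1000 <= user_id <= 9999:   # assume valid IDs are four digits
        return "account_menu"     # recognized caller: continue the flow
    return "live_agent"           # unrecognized ID: escalate the call
```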
Use SSML within your agent to create a more comfortable experience for your users. You can use it to vary the rate of speech and pitch, or to read selected text as a specific type of input, such as digits, dates, or numbers. It also helps provide a human touch so the user journey doesn't feel robotic: users tend to be more patient with human-sounding voices than with robotic ones. You can also incorporate something like neural voices. Please be sure to include a disclaimer at the beginning of your virtual assistant's journey to make sure the user knows they're talking to AI!
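For example, a prompt that pauses briefly and then reads an ID digit by digit at a slower rate might look like this (the elements shown follow the standard SSML specification; which tags are supported can depend on the voice you chose, and the ID value is a placeholder):

```xml
<speak>
  Thanks! I found your account.
  <break time="300ms"/>
  Your ID is
  <prosody rate="slow">
    <say-as interpret-as="digits">40123</say-as>
  </prosody>.
</speak>
```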