Listen

The agent enters “Listening” mode and waits for user input (text, voice, or DTMF).

The user input is captured as a global parameter value for later use. With the Listen node, the virtual assistant can also record the user input and isolate that recording for later processing.

The Listen node might remind you of the Collect Input node, as it has very similar features (DTMF, etc.); however, it does not hold a prompt.

Use it if you want to catch everything the user says, perform different actions based on that input, and only ask follow-up questions if necessary - similar to the intent annotation. However, in cases where we would otherwise use a parameter with sys.any followed by a Classification node, we use the Listen node instead.

When to use the Listen node?

Usually, the agent asks the user “What is the order number?” in a Collect Input node. The caller replies “That’s 2348839”, which is captured in that Collect Input node.

But sometimes we don't want to limit the caller's input to a number, as the caller might also say “I don’t know”, “I don’t have an order number” or “I want to speak to a human”.

Therefore, we will add a Listen node that catches everything the caller says and allows us to perform actions on that input later on. We will be able to capture either the order number or other input such as “I don't know” - both of which can be classified into the correct intent with the relevant training set in a Classification node.

We will prompt the caller in a Speak node instead of a Collect Input node. We don't need the Collect Input node right at the beginning, as we collect the entire user input in the Listen node and only perform actions on that input afterwards.
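Conceptually, the start of this flow behaves like the sketch below. This is a minimal illustration only: the class and method names (Conversation, speak, listen) are hypothetical and do not reflect the platform's actual API; they simply show the order of operations - the Speak node prompts, the Listen node captures the entire utterance into a parameter.

```python
# Minimal, illustrative sketch only: the class and method names below are
# hypothetical and do not reflect the platform's actual API.

class Conversation:
    """Toy stand-in for a voice conversation."""

    def __init__(self):
        self.parameters = {}  # global parameters shared across the flow

    def speak(self, prompt: str) -> None:
        # Speak node: play a prompt; no input is collected here.
        print(f"BOT: {prompt}")

    def listen(self) -> str:
        # Listen node: capture the caller's entire utterance as-is.
        return input("CALLER: ")


def order_number_flow(conv: Conversation) -> str:
    conv.speak("May I ask for your order number?")
    # Store the full utterance as a global parameter for later steps
    # (e.g. the Classification node described further below).
    conv.parameters["user_input"] = conv.listen()
    return conv.parameters["user_input"]


if __name__ == "__main__":
    order_number_flow(Conversation())
```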

How it works

Step by step -

Step 1

Start the flow by adding a Speak node with the prompt “May I ask for your order number?”.

The Listen node helps if the caller doesn’t give the order number but instead says “I don’t know”, “Where can I find it?” or “I want to talk to customer service”. You will want to accommodate these kinds of cases to show the flexibility of the virtual assistant.

Step 2

Now, add the Listen node, select the parameter that should capture the user input, and connect the exit point of the Speak node to the entry point of the Listen node.

Step 3

Add the Classification node with the relevant intents you want the agent to classify into.

For example, an intent named “Order tracking” with the training set “I want to track my package” as well as “My order number is 23438999320”. Add the intents that handle “Where can I find it?”, “I want to speak to an agent”, etc., and add the relevant training set. The agent will now be able to classify into the right intent by taking the entire user input into account.
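To make the branching idea concrete, here is a rough, hypothetical sketch of what happens to the captured utterance. The real Classification node uses a trained intent model; the naive keyword-overlap match below is only a stand-in, and the intent names and training phrases are example assumptions.

```python
import re

# Naive, illustrative sketch of the branching idea only. The real
# Classification node uses a trained intent model; this keyword-overlap
# match is just a stand-in, and the intent names/phrases are examples.

INTENTS = {
    "Order tracking": [
        "I want to track my package",
        "My order number is 23438999320",
    ],
    "Find order number": [
        "Where can I find it",
        "I don't know my order number",
    ],
    "Speak to human": [
        "I want to speak to an agent",
        "I want to talk to customer service",
    ],
}


def tokens(text: str) -> set:
    """Lowercased word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def classify(user_input: str) -> str:
    """Return the intent whose training phrases overlap most with the input."""
    words = tokens(user_input)
    best_intent, best_score = "Fallback", 0
    for intent, phrases in INTENTS.items():
        score = max(len(words & tokens(phrase)) for phrase in phrases)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent


print(classify("That's 2348839, I want to track my package"))  # Order tracking
print(classify("Where can I find it?"))                        # Find order number
```

Note how the same Listen parameter can carry an order number or a question like “Where can I find it?”, and the classification step decides which branch of the flow to continue with.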
