# Live Agent Routing

Need to hand off your text-based virtual assistant flow to a human representative? Meet the **Live Agent Routing** node. Built especially for text-based agents, it lets your human agents assist your customers with escalations beyond the scope of a virtual agent.

<figure><img src="/files/6uD4lGfj6FThzSEHPgyY" alt=""><figcaption></figcaption></figure>

This node allows your end users to interact with live agents without ever having to leave the conversation, and the live agent gets insight into what was said before the routing was triggered. From a reporting perspective, it also allows you to view the full conversation with both the virtual assistant and the live agent, so you can optimize your live agents' performance.

{% hint style="info" %}
*Best practice is to use the **Send Message** node to notify your end users that they are being routed to a live agent before the Live Agent Routing node runs. You can also send a similar notification after the conversation with the live agent has ended.*
{% endhint %}

**Here's how it works:**

{% hint style="warning" %}
*Please note that the Live Agent Routing node will only work for HTTP agents if a WebSocket connection is established. To learn more, please visit* [*<mark style="color:purple;">this page.</mark>*](https://studio.docs.ai.vonage.com/http/nodes/action/live-agent-routing/websockets-connections-for-live-agent-routing)
{% endhint %}

***Start Connection EP***

The endpoint entered here receives the live agent confirmation, the conversation history, and the system parameters collected up to the point this node was triggered in the conversational flow.

![](/files/zlxBCmr3p1oupVVRpgrZ)

Some of the data sent includes the agent ID, session ID, a transcription of the conversation (including both user and agent utterances), and system parameters.

Here's a request example for this field:

*Method: POST*\
*Content-Type: application/json*\
*Body*:&#x20;

```
{
  "sessionId": string,
  "history": {
    "transcription": [
      { "user": string, "agent": string }
    ],
    "parameters": [
      {
        "name": string,
        "value": string
      }
    ]
  }
}
```
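As a sketch of what a receiver for this endpoint might do, the following parses the body above into the pieces a live-agent console would typically need. The helper name and return shape are illustrative, not part of the Studio API; only the field names come from the schema above.

```python
import json


def parse_start_connection(raw_body: bytes) -> dict:
    """Parse a Start Connection EP request body (schema as shown above)."""
    data = json.loads(raw_body)
    history = data.get("history", {})
    return {
        "session_id": data["sessionId"],
        # List of {"user": ..., "agent": ...} utterance pairs.
        "transcription": history.get("transcription", []),
        # Flatten the name/value pairs into a plain dict for easy lookup.
        "parameters": {p["name"]: p["value"] for p in history.get("parameters", [])},
    }
```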

***Inbound Transfer EP***

This endpoint is where all the inbound messages from the end user to the live agent are sent.

![](/files/ceilMnmDo5MBpCGtpBEQ)

Here's a request example for this field:

*Method: POST*\
*Content-Type: application/json*\
*Body:*&#x20;

```
{
  "sessionId": string,
  "text": string,
  "type": "text"
}
```
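A minimal sketch of validating an inbound payload before handing it to your agent desk. The field names come from the example above; the function itself is hypothetical:

```python
import json


def parse_inbound_message(raw_body: bytes) -> tuple:
    """Validate an Inbound Transfer EP body and return (session_id, text)."""
    data = json.loads(raw_body)
    # Only text messages are documented for this endpoint.
    if data.get("type") != "text":
        raise ValueError(f"unexpected message type: {data.get('type')!r}")
    return data["sessionId"], data["text"]
```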

***Status Delivery EP***

Your live agents can also see the status of their messages.&#x20;

Enter the endpoint where you want message statuses to be delivered in the Status Delivery EP text box.

<figure><img src="https://lh6.googleusercontent.com/6M7IUbUV0PjMlYYzpJH0y6WEX_hNwrErueRbhxm3gTF2FZodIXNodKQ-wSJZRIvvmTaWGNBeGipw03nAG1dR1aZN8EGjMl5QEeOSG75LU46-UZJcCOI3azW0NPAbJ0R6JT6zQZU8DjONce73lYEUuDI" alt=""><figcaption></figcaption></figure>

Every time the status of a message changes, the live agent receives an update, giving them the information they need to make decisions that affect the flow of the conversation.

***Outbound Transfer EP***

Outbound messages from the live agent to end user will be sent to this endpoint.

Currently, only text messages can be sent from HTTP agents. The body of the Outbound Transfer request has the following syntax:&#x20;

```
{ "text": string }
```

![](/files/gtZNewR73E8BCqOtrjuW)

Here's a request example for this field:

## Outbound Transfer

<mark style="color:green;">`POST`</mark> `https://studio-api-eu.ai.vonage.com/live-agent/outbound/:session_id`

#### Headers

| Name                                           | Type   | Description          |
| ---------------------------------------------- | ------ | -------------------- |
| X-Vgai-Key<mark style="color:red;">\*</mark>   | String | The client's Vgai key |
| Content-Type<mark style="color:red;">\*</mark> | String | *application/json*   |

{% tabs %}
{% tab title="200: OK " %}

```javascript
{
    // Response
}
```

{% endtab %}
{% endtabs %}

To learn more about the message types, please visit [<mark style="color:purple;">this page</mark>](https://developer.vonage.com/api/messages-olympus?theme=dark).
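For example, an outbound call could be prepared with the Python standard library as follows. Only the URL, headers, and body shape come from the documentation above; the helper itself is illustrative. Calling `urllib.request.urlopen(req)` would then actually send it:

```python
import json
import urllib.request


def build_outbound_request(session_id: str, text: str, api_key: str) -> urllib.request.Request:
    """Prepare (but do not send) a POST to the Outbound Transfer endpoint."""
    url = f"https://studio-api-eu.ai.vonage.com/live-agent/outbound/{session_id}"
    return urllib.request.Request(
        url,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"X-Vgai-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
```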

***Stop Connection EP***

Confirmation that the conversation with the live agent has ended is sent to this endpoint. Once the confirmation is received, control of the conversation is returned to the virtual assistant.

<figure><img src="/files/molRnsF9Mgpdd8vkrEmr" alt=""><figcaption></figcaption></figure>

Here's what the API interface looks like for this field:

## Stop Connection EP

<mark style="color:green;">`POST`</mark> `https://studio-api-eu.ai.vonage.com/live-agent/disconnect/:session_id`

#### Headers

| Name                                           | Type   | Description        |
| ---------------------------------------------- | ------ | ------------------ |
| Content-Type<mark style="color:red;">\*</mark> | String | *application/json* |

{% hint style="warning" %}
*For the outbound and disconnect endpoints, the permanent X-Vgai-Key header is the API key generated from Studio.* [*<mark style="color:purple;">Here's how</mark>*](https://api.support.vonage.com/hc/en-us/articles/7075840879380)*.*
{% endhint %}
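A disconnect call can be prepared the same way. The URL and headers come from the documentation above; the empty JSON body is an assumption (the docs only specify the URL and headers), and the helper itself is illustrative:

```python
import urllib.request


def build_disconnect_request(session_id: str, api_key: str) -> urllib.request.Request:
    """Prepare (but do not send) a POST to the Stop Connection (disconnect) endpoint."""
    url = f"https://studio-api-eu.ai.vonage.com/live-agent/disconnect/{session_id}"
    return urllib.request.Request(
        url,
        data=b"{}",  # assumed empty body; the session is identified by the path parameter
        headers={"X-Vgai-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
```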

You can also choose to save the whole chat, including the live agent portion, in the reports under call logs.

***Transfer Parameters***

You can now send all the parameters collected during the conversation between the user and the virtual assistant, prior to live agent connection, to the live agent. This includes custom, user and multi-value parameters.&#x20;

<figure><img src="https://lh4.googleusercontent.com/L96Mrq6ggahEYCXMzIip6lBYxi29EAIS2XZA8aMRSrlgcmKLIfSNvuilpQDQ99UZSIs1rVENqqS0RNb3k0Pdzaj12Po9eSv4IunllJPCHZ3cAjfTvitRpMKWuLOTPH6ILh98aAXcEJ-9VG1SGoJBbko" alt=""><figcaption></figcaption></figure>

This feature saves time for both the live agent and the user, since no repetition or searching through collected information is needed to understand the context of the conversation before routing.

***Agent Response Waiting Time***

The default response waiting time is 6.94 hours. This applies to any response from the live agent, from start to finish. You can change this waiting period to suit your needs by selecting the number of hours in the “Agent response waiting time” drop-down menu.&#x20;

<figure><img src="https://lh5.googleusercontent.com/zQUUM9VgIbFjr1y7UOsVoTy_tvG9JHTEWT0CA1IB7L6mri5uixPq6d9UTaeBgjkAUNHehR1IhwHJ0zGqtjZZahQOlD-4gomrUDwfvluAZUU60afYtpvJP_66W1gqCUZttkj-0U-sBVM3263cwUw-1Co" alt=""><figcaption></figcaption></figure>

{% hint style="danger" %}
*Please note that the minimum waiting time is 1 hour and the maximum is 20 hours.*
{% endhint %}

**Here are some things to keep in mind while using this node:**

* The Default output is executed once the live agent is connected, and again once the conversation is returned to the virtual assistant's control.
* If the live agent is unable to connect, the output allocated under “Failed” is executed.
* The “No response” output is executed when no response is received from the live agent within the configured response waiting time.
* The default response waiting time is 6.94 hours. This applies to any response from the live agent, from start to finish.
* The text character limit is currently 4096 characters in total.
* Transfers are asynchronous, which means a message does not have to be ‘delivered’ for the transfer to happen. A caveat is that, in isolated cases, the last user input sent may not be visible to the live agent.

**What if you want to send media to the live agent?**

Media can be used in this node only on the WhatsApp channel. Please refer to the list of supported media types below:

* Images - jpg, jpeg, and png
* Audio - aac, m4a, amr, mp3, and opus
* Video - mp4 and 3gpp (note: only the H.264 video codec and AAC audio codec are supported)
* Files - zip, csv, and pdf

To learn more about supported media types, please visit [<mark style="color:purple;">this page</mark>](https://developer.vonage.com/api/messages-olympus?theme=dark).
