Overview: Say + Intent workflow module (Console)
This article provides an overview of the Say + Intent module in workflows in Console.
Overview
The Say + Intent workflow module determines a customer’s goals or intentions for an interaction. With the Say + Intent module, you can create an Interactive Virtual Assistant (IVA) that routes the customer through a workflow to the destination that best meets their needs.
The Say + Intent module can interpret full sentences and, using machine learning, decipher an intent (what the customer wants to do). The module accomplishes this by comparing the customer’s spoken or typed response with a set of keywords that you define. This is called Natural Language Understanding (NLU).
Below are some common use cases for the Say + Intent module:
Interactive Virtual Assistant (IVA): Configure the Say + Intent module to ask an open-ended question like “How can our billing team help you today?” and interpret the customer’s response.
Traditional interactive voice response (IVR): Configure the Say + Intent module to recognize predefined routes like “Say or press 1 to reach our sales team.”
Opting into / out of surveys or further communication: Configure the Say + Intent module to determine whether the customer is willing to complete a survey or receive a callback. For example: “After this call, are you interested in completing a survey?” or “After this call, are you interested in a callback?”
Continue reading this article to learn more about how the Say + Intent module works.
How it works
The Say + Intent module enables you to engage with and route customers through a workflow based on the customer’s intent. This gives the customer the power to more easily and effectively communicate what they’d like to do.
When a workflow reaches the Say + Intent module, it detects the customer’s communication type and returns the appropriate prompt that you configured for the module (the phone prompt for phone interactions, the messaging prompt for messaging interactions, etc.).
For example, you might configure the greeting for a voice caller to be “Thank you for calling” while having the messaging greeting be “Thank you for messaging.”
It’s common to configure multiple prompt options to use when engaging with the customer. This tends to create a more welcoming, natural experience. For example, for voice interactions, one prompt could be “Hi there, thanks for calling, how can we help?” while another is “Greetings, thanks for calling our team, what can we do to assist you?” One of the prompts is selected at random, so customers who contact you often are less likely to hear the same prompt repeatedly.
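As a rough sketch of the idea, the selection can be thought of as picking one entry at random from the list of configured prompts. The code below is illustrative only (the prompt texts and the pickPrompt function are hypothetical); Console performs this selection internally.

```typescript
// Illustrative only: choose one prompt at random from the configured options
// so repeat callers are less likely to hear the same greeting every time.
const voicePrompts: string[] = [
  "Hi there, thanks for calling, how can we help?",
  "Greetings, thanks for calling our team, what can we do to assist you?",
];

function pickPrompt(prompts: string[]): string {
  const index = Math.floor(Math.random() * prompts.length);
  return prompts[index];
}

console.log(pickPrompt(voicePrompts));
```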
When a customer is prompted to give a response to the Say + Intent module, their response is analyzed and compared against the triggers and intents configured on the module. The triggers and intents determine which branch of the workflow the customers will experience. Continue reading below to learn more about triggers and intents.
Triggers and intents
The Say + Intent module works through triggers and intents. A trigger is a path coming off the Say + Intent module. They’re called triggers because they trigger a path in the workflow.
An intent is a word or phrase used by the module to compare against the customer’s message. They’re called intents because they describe the needs or goals of a customer.
For example, if the Say + Intent module asks “What would you like to do today?”, a trigger could be “Pay a bill.” The Say + Intent module routes through this trigger if it detects an intent related to the trigger.
In the screenshot above, the trigger is “Pay Bill” and its intents are:
“Pay Bill”
“bill”
“pay”
“invoice”
This means that, if the module detects any of these words or phrases in the customer’s response, it would route through this trigger to a branch of the workflow related to bill payment.
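To make the routing idea concrete, here is a minimal sketch of matching a response against a trigger’s intents. It assumes a simplified trigger shape (a name plus a list of intent phrases) and uses plain substring checks; Console’s actual matching is NLU-based, so treat this only as an illustration.

```typescript
// Hypothetical, simplified model of a trigger and its intents.
interface Trigger {
  name: string;
  intents: string[];
}

const triggers: Trigger[] = [
  { name: "Pay Bill", intents: ["pay bill", "bill", "pay", "invoice"] },
];

// Return the first trigger whose intents appear in the customer's response,
// or "No Match" if nothing is detected (mirroring the default trigger).
function routeResponse(response: string, configured: Trigger[]): string {
  const normalized = response.toLowerCase();
  for (const trigger of configured) {
    if (trigger.intents.some((intent) => normalized.includes(intent))) {
      return trigger.name;
    }
  }
  return "No Match";
}

console.log(routeResponse("I'd like to pay my invoice", triggers)); // "Pay Bill"
```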
For web chats, you can also define buttons that appear to the customer. For example, if one of the triggers is “Pay Bill”, one of the buttons can have a label of “Pay Bill” and a value of “bill”, which is interpreted as an intent. The customer can then either type their response to the bot or click one of the trigger buttons to tell the bot where they want to be routed next in the workflow.
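As a rough illustration of the label/value pairing (the ChatButton type and field names are hypothetical, not Console’s actual schema), a button might be modeled like this:

```typescript
// Hypothetical shape for web chat buttons: the label is what the customer
// sees, the value is what gets matched against intents when clicked.
interface ChatButton {
  label: string;
  value: string;
}

const payBillButton: ChatButton = { label: "Pay Bill", value: "bill" };

// Clicking the button submits "bill", which the Pay Bill trigger's intents
// would recognize the same way as a typed response containing "bill".
console.log(`Button "${payBillButton.label}" submits intent "${payBillButton.value}"`);
```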
Trigger history
Once a workflow is published, you can double-click any of the trigger modules branching off a Say + Intent module to get a breakdown of how often each word was detected in a period of time (ranging from the last minute to the last 30 days).
This is especially useful for the No Match trigger because this data shows you what kinds of words customers are using, so you can design your Say + Intent module to better meet their needs. For example, if you notice an uptick in customers asking to talk to someone about their “contract”, and those interactions should route down your “Pay Bill” trigger, you can update the intents for that trigger to include phrases related to “contract”.
Text to speech
The Say + Intent module can synthesize speech in 40+ languages and variants. With text-to-speech, you:
Write a message prompt in the language of your choice.
Define the language in the Language dropdown menu.
Define the voice that will be used to synthesize the text in the Voice dropdown menu.
In this example, the Say + Intent module is configured to recite a message (Good day. How can I help you?) in Italian with the voice “Female 1” via text-to-speech.
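As a sketch of how these settings fit together (the TextToSpeechPrompt type, field names, and the Italian translation are illustrative assumptions, not Console’s actual configuration format), a text-to-speech phone prompt might be represented like this:

```typescript
// Hypothetical representation of a text-to-speech phone prompt.
// The prompt text should be written in the language selected below.
interface TextToSpeechPrompt {
  type: "text";
  text: string;      // what the module recites
  language: string;  // must match the language the text is written in
  voice: string;     // e.g. "Female 1"
}

const italianPrompt: TextToSpeechPrompt = {
  type: "text",
  text: "Buongiorno. Come posso aiutarti?", // "Good day. How can I help you?"
  language: "Italian",
  voice: "Female 1",
};
```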
Visual breakdown
Exterior structure
This is the exterior structure of a Say + Intent module. All modules share this same structure.
Reference the Overview: Workflow modules article to take a deeper dive into each of these components.
Interior structure
Below is a deep-dive explanation of each interior area of the Say + Intent module.
General settings
Label: The Label field is where you define a custom name for the workflow module. This can make it easier to see what the module is doing from the workflow editor workspace, and it’s how the module is referred to in reporting.
Auto Progress: The Auto Progress toggle sets whether the workflow can ask multiple questions in one prompt to fill multiple variables. For example, if a Say + Gather module is set to auto progress while asking for a user’s email and phone number, the workflow gathers both pieces of information from a single response (if that response contains both the email and the phone number). This is a way of asking for more information more naturally, as sketched below.
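As a loose illustration of that idea (the extractContactDetails function and its regular expressions are hypothetical, not how Console parses responses), auto progress amounts to pulling several values out of one answer instead of asking separate questions:

```typescript
// Illustrative only: extract an email address and a phone number from a
// single customer response so both variables can be filled at once.
function extractContactDetails(response: string): { email?: string; phone?: string } {
  const emailMatch = response.match(/[\w.+-]+@[\w-]+\.[\w.]+/);
  const phoneMatch = response.match(/\+?\d[\d\s()-]{6,}\d/);
  return { email: emailMatch?.[0], phone: phoneMatch?.[0] };
}

console.log(
  extractContactDetails("Sure, it's jane@example.com and you can call 555-123-4567")
);
// { email: "jane@example.com", phone: "555-123-4567" }
```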
Phone prompts
The Phone prompt section is where you define the prompts to be used for phone interactions. The fields available for you to configure depend on which prompt type you select: Text, Audio, or Dynamic Audio.
Text: This option indicates that the workflow will recite the message to the customer using text-to-speech functionality.
Play button: The Play button lets you hear how the text to speech will sound to the customer.
Language: The Language menu lets you select the language of the prompt text. The language should match the actual language of the recited text. For example, text written in Italian should have the Language menu set to Italian.
Voice: The Voice menu lets you select the type of voice that will recite the text. Each language has a selection of “Male” (typically masculine, lower pitch) and “Female” (typically feminine, higher pitch) voices to choose from.
Audio: This option indicates that the workflow will play an audio file to the customer. Before using this option, you first need to add audio files in the Audio Upload workspace in Console (CX > Audio Upload). Once they’re added, the file options populate in these menus for you to use.
Audio file selector: The Audio file selector menu is where you choose the audio file to use. This is a list of audio files uploaded in the Audio Upload workspace.
Play / Pause / Volume buttons: You can use the play, pause, and volume buttons to test playback of the audio file.
Dynamic Audio: This option indicates that the workflow will use dynamic audio. With the dynamic audio file option, you can choose audio files to play based on previous logic in the workflow (like from a Say + Gather module or a Decision module).
For example, in the screenshot below, the Webhook module retrieves an audio file via the API Platform based on the inbound phone number. This could be an audio file custom tailored to a specific customer (identified by the inbound phone number). The webhook can return a variable or an advanced (soundFileId) value that you can provide to the Say + Intent module.
A workflow containing a webhook to get a dynamic sound file.
In the Webhook module, you can create a variable for the sound file you want to use.
You can use the variable from the Webhook module to play the dynamic audio file.
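As a rough sketch of the pattern (the endpoint URL, response shape, and getDynamicSoundFileId function are hypothetical assumptions, not the actual API Platform interface), the webhook lookup could look like this:

```typescript
// Illustrative only: look up a caller-specific sound file ID by inbound
// phone number, then store it in a workflow variable that a Dynamic Audio
// prompt can reference.
interface SoundFileLookup {
  soundFileId: string;
}

async function getDynamicSoundFileId(inboundNumber: string): Promise<string> {
  // Placeholder endpoint; substitute your own API Platform route.
  const response = await fetch(
    `https://api.example.com/callers/${encodeURIComponent(inboundNumber)}/sound-file`
  );
  const data = (await response.json()) as SoundFileLookup;
  return data.soundFileId;
}
```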
Advanced phone prompt settings
There are several advanced settings available to you for phone prompts in the Say + Intent module.
Interrupt Prompt: When enabled, the Interrupt Prompt toggle lets the customer provide an answer to the prompt before the prompt has finished playing.
Only Interrupt Prompt on Match: When enabled, the Only Interrupt Prompt on Match toggle only interrupts the prompt if the customer has provided a valid response. If not, the prompt continues to play.
Allow DTMF: When enabled, the Allow DTMF toggle allows the customer to input responses with the phone’s keypad.
Allow Speech: When enabled, the Allow Speech toggle allows the customer to verbally give responses to the prompt.
Timeout (seconds): The Timeout (seconds) field defines how many seconds to wait for a response from the customer.
After DTMF input timeout (seconds): The After DTMF input timeout (seconds) field defines how many seconds the module should wait after a DTMF input to proceed.
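Taken together, the advanced settings above can be pictured as a small configuration object. The sketch below uses hypothetical field names and example values; it is not Console’s actual settings schema.

```typescript
// Hypothetical grouping of the advanced phone prompt settings.
interface AdvancedPhonePromptSettings {
  interruptPrompt: boolean;             // caller may answer before the prompt finishes
  onlyInterruptPromptOnMatch: boolean;  // only interrupt on a valid response
  allowDtmf: boolean;                   // accept keypad input
  allowSpeech: boolean;                 // accept spoken responses
  timeoutSeconds: number;               // how long to wait for a response
  afterDtmfInputTimeoutSeconds: number; // wait after keypad input before proceeding
}

const exampleSettings: AdvancedPhonePromptSettings = {
  interruptPrompt: true,
  onlyInterruptPromptOnMatch: true,
  allowDtmf: true,
  allowSpeech: true,
  timeoutSeconds: 10,
  afterDtmfInputTimeoutSeconds: 3,
};
```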
Messaging prompts
For messaging interactions, the prompts provided by the Say + Intent module are simpler than those for phone interactions: you simply provide the text that will be sent to the customer. You can also add multiple prompts that the system will randomly choose from (the same way the phone prompts function). Note that whatever is typed into the messaging prompt is sent to the customer exactly as written.
Note: Though the Text prompt appears as a dropdown selector, the only option available is Text.
Buttons
The Buttons section is where you define buttons to appear in web chats. When a customer opens a web chat that’s connected to a workflow with a Say + Intent module, the buttons defined in the module are presented to the customer (like in the screenshots below).
Buttons configured for Yes / No on a web chat
The buttons configured in the Say + Intent module in the workflow
Triggers
The Triggers and Intents section is where you define the triggers and intents for the Say + Intent module. Note that all Say + Intent modules start with a default trigger called “No Match”, which the module routes through if none of the other triggers’ intents are detected in the customer’s response.
The Label field defines the name of the trigger. The Intents box is where you can add intents (by typing them into the text field) or remove intents (by clicking the x next to each intent).