Voiceflow
Voiceflow is a chatbot builder that allows you to design, prototype, and launch chat and voice assistants. To connect your Voiceflow assistant to a Digital Person project, follow the steps below:
Creating a Voiceflow API Key
Create a project - Build a conversational flow in Voiceflow Creator and choose the “Design & Handoff” option.
Create Assistant - On the Create Assistant screen, select “chat” or “voice” as the modality and keep the default “Voiceflow” NLU.
Render your project - Ensure you're on the Designer tab by checking the left sidebar, then click the Run button in the top-right corner to compile your project.
Train your assistant - Click the "Train Assistant" button in the Prototype tool to train the NLU model. Note that the train option will not be available if your conversation does not contain an intent.
Get an API key - Go to the Integration tab in the left sidebar and copy your project's API key.
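Once you have the API key, you can sanity-check it against Voiceflow's Dialog Manager API. The sketch below builds (but does not send) an interact request; the endpoint URL and header names follow Voiceflow's public API documentation at the time of writing, so verify them against the current reference before relying on them.

```python
import json
import urllib.request

API_KEY = "VF.DM.xxxx"   # placeholder -- substitute your project's API key
USER_ID = "demo-user"    # any stable per-conversation identifier

def build_interact_request(api_key: str, user_id: str, utterance: str) -> urllib.request.Request:
    """Build (without sending) a Dialog Manager interact request for a text utterance."""
    url = f"https://general-runtime.voiceflow.com/state/user/{user_id}/interact"
    body = json.dumps({"action": {"type": "text", "payload": utterance}}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_interact_request(API_KEY, USER_ID, "hello")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` should return the assistant's first traces when the key is valid.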
Connecting Voiceflow Assistant to your Digital Person
Choose Voiceflow NLP from the Manage Conversations and Skills section of Digital DNA Studio. Refer to Creating a Project section for detailed instructions on adding an NLP and skills to your project.
Create a DDNA Studio project with the Voiceflow skill as a base conversation
Use the API key to configure the skill: enter the API key, set the toggle if you want to use the “development” version of your project (see the Version support section for more info), and click “Enable as base”.
Version support
You can switch between development and production modes using the Use Development Version of the Voiceflow project toggle in the skill configuration.
Development connects to the version currently displayed on the Voiceflow canvas; production connects to the version that has been published. See how to update each version here: https://developer.voiceflow.com/reference/overview#updating-your-version.
Fallback support
Messages will be flagged as fallback when the intent returned by Voiceflow is None. Fallback messages are necessary for interoperability with other skills.
Two ways of specifying fallback responses with Voiceflow are:
Using the built-in Fallback intent:
Using a Global No Match response:
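The fallback check described above can be sketched in a few lines. The response shape here is a simplified, hypothetical payload in which the matched intent name is reported as the string "None"; it is not the exact trace format returned by the Voiceflow runtime.

```python
def is_fallback(response: dict) -> bool:
    """Treat a turn as fallback when no intent matched (name missing or 'None')."""
    intent = (response.get("intent") or {}).get("name")
    return intent in (None, "None")

print(is_fallback({"intent": {"name": "None"}}))         # unmatched turn -> fallback
print(is_fallback({"intent": {"name": "order_pizza"}}))  # matched intent -> not fallback
```

Flagging turns this way is what lets another skill take over when the Voiceflow project has no answer.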
Content Card support
Voiceflow content blocks
Currently we support the following Voiceflow content blocks:
Text
Image
Buttons
Cards
URL Support
Links/URLs are currently supported in text blocks, but not within the descriptions of card blocks. If a text block contains a valid URL, the URL is stripped from the spoken text and displayed as a clickable link card.
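The URL handling described above can be illustrated with a short sketch: a URL found in a text block is removed from the spoken text and surfaced separately so it can be rendered as a link card. The regex is a deliberate simplification, not the platform's actual URL detector.

```python
import re

URL_RE = re.compile(r"https?://\S+")

def split_links(text: str) -> tuple[str, list[str]]:
    """Strip URLs from the spoken text and return them separately for link cards."""
    links = URL_RE.findall(text)
    spoken = URL_RE.sub("", text).strip()
    return " ".join(spoken.split()), links   # collapse leftover whitespace

spoken, links = split_links("Read more at https://example.com/docs today")
print(spoken)  # spoken text with the URL removed
print(links)   # URLs to render as clickable link cards
```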
Soul Machines content cards
Voiceflow supports all types of Soul Machines Content Cards through the Dev Custom Action block. The Custom Action block currently requires a Voiceflow subscription.
The name of the Custom Action step must be smcard, and the Action Body is expected to be a JSON object where the text property contains the output text and the variables property contains the content card variables.
If desired, you can connect a block to the default path of the action step to continue the conversation from that point; multiple paths are not supported.
Video example
{
  "text": "here is a video @showcards(card)",
  "variables": {
    "card": {
      "type": "video",
      "id": "youtubeVideo",
      "data": {
        "videoId": "vxmthHfkoaw",
        "autoplay": "true",
        "autoclose": "true"
      }
    }
  }
}
Options example (connected to choice block)
{
  "text": "Do you like cats? @showcards(card)",
  "variables": {
    "card": {
      "type": "options",
      "id": "options",
      "data": {
        "options": [
          {
            "label": "Yes"
          },
          {
            "label": "No"
          }
        ]
      }
    }
  }
}
Options example (connected to capture block)
{
  "text": "Here are some options @showcards(card)",
  "variables": {
    "card": {
      "type": "options",
      "id": "options",
      "data": {
        "options": [
          {
            "label": "First Option",
            "value": "the first one"
          },
          {
            "label": "Second Option",
            "value": "the second one"
          },
          {
            "label": "Third Option",
            "value": "the third one"
          }
        ]
      }
    }
  }
}
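The Action Body examples above all share the same shape, so they can be generated with a small helper. The @showcards(...) marker and the text/variables layout follow the examples on this page; build_smcard_body itself is a hypothetical convenience function, not part of any SDK.

```python
import json

def build_smcard_body(text: str, card_id: str, card: dict) -> str:
    """Return the JSON Action Body for a Custom Action step named 'smcard'."""
    body = {
        "text": f"{text} @showcards({card_id})",  # marker tells the Digital Person which card to show
        "variables": {card_id: card},
    }
    return json.dumps(body, indent=2)

# Rebuild the video example from above.
video_body = build_smcard_body(
    "here is a video",
    "card",
    {"type": "video", "id": "youtubeVideo",
     "data": {"videoId": "vxmthHfkoaw", "autoplay": "true", "autoclose": "true"}},
)
print(video_body)
```

Paste the printed JSON into the Action Body field of the smcard Custom Action step.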