Lab 4: Add a Multimodal Response to Your Skill
Welcome to Lab 4. In this lab, you'll learn how to add multimodality to your skill by using the Alexa Presentation Language (APL).
Time required: 30 minutes
What you'll learn
- How to add a multimodal response
- How to use multimodal responses in your dialogs
- How to make changes in your skill backend to support APL
- (Optional) How to use APL templates on welcome and out of domain responses
Introduction
In Lab 3, you built an Alexa Conversations skill that supports voice-only responses. When the requested flight is found, the skill returns a voice-only response that contains the flight details.
In this lab, you will learn how to extend that response by adding visuals. We will add an APL template to the flight search result to enable multimodality on APL-supported devices, such as the Echo Show.
Step 1: Enable APL support in the skill manifest
First, we need to enable APL support in the skill manifest so that our skill can support multimodality. To do so, we make changes in the "skill.json" file under the "skill-package" directory.
We need to add "interfaces" under "manifest.apis.custom":
Your skill.json file will look as indicated below.
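As a minimal sketch, the relevant portion of skill.json would look roughly like this (surrounding manifest fields omitted); the `ALEXA_PRESENTATION_APL` interface type is what enables APL:

```json
{
  "manifest": {
    "apis": {
      "custom": {
        "interfaces": [
          {
            "type": "ALEXA_PRESENTATION_APL"
          }
        ]
      }
    }
  }
}
```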
Step 2: Add an APL template
Similar to the APL for audio (APLA) templates (voice responses) we created in the previous lab, we will add an APL template to support visual responses.
- First, create a new folder named "display" under the "skill-package/response" directory. This folder will contain the APL files that your skill uses.
- To create an APL file for the flight search results, create a new folder under "display" called "FlightSearchResponseVisual".
- Next, create a "document.json" file under the newly created "FlightSearchResponseVisual" folder and copy in the code below:
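As a minimal sketch of what such a document looks like (the lab's actual template is richer, with header, title, and alignment handling; the `payload` parameter is the data source the skill backend supplies):

```json
{
  "type": "APL",
  "version": "1.8",
  "mainTemplate": {
    "parameters": ["payload"],
    "items": [
      {
        "type": "Container",
        "width": "100vw",
        "height": "100vh",
        "items": [
          {
            "type": "Text",
            "text": "${payload.display.primaryText}",
            "textAlign": "center"
          }
        ]
      }
    ]
  }
}
```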
Step 3: Use APL template in FlightSearch dialog
First, we need to import the APL to our skill. To do that, you need to add the following code:
This will import all APL templates under the display folder.
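One plausible shape for that import, assuming the build exposes the display templates under a `displays` namespace (check the ACDL reference for the exact namespace in your project):

```
// Hedged sketch: makes all APL templates under skill-package/response/display
// available to this ACDL file by name
import displays.*
```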
We want to pass the flight details to the APL template we created. To do that, we need to extend our payload with a new type:
As you can see above, we used the String primitive type for the Display type we created. We also need to import the String type into our ACDL file with the following import statement:
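Putting both pieces together, a hedged sketch of the import plus a Display type built from String fields might look like this (the type and field names here are illustrative; match them to your payload and APL template):

```
// String lives in the ACDL schema namespace
import com.amazon.alexa.schema.String

// Hypothetical Display type carrying the values the APL template binds to
type DisplayDetails {
  String headerTitle
  String titleText
  String primaryText
  String secondaryText
  String textAlignment
}
```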
Now, we need to make the change in our dialog. In our last response we have:
We need to add our APL template to the MultiModalResponse. After adding the APL, your response will look like this:
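As a hedged before/after sketch (dialog, template, and payload names here are illustrative; the `apla`/`apl` arguments of MultiModalResponse are the key change):

```
// Before: voice-only response
response(
  act = Notify {actionName = searchFlight},
  response = MultiModalResponse {apla = flight_search_apla}
)

// After: the same response with the APL template attached
response(
  act = Notify {actionName = searchFlight},
  response = MultiModalResponse {
    apla = flight_search_apla,
    apl = FlightSearchResponseVisual
  }
)
```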
Here is the snapshot of your ACDL file:
Step 4: Update your skill backend to support APL response
We added our response to our dialog, but this response will require some inputs such as header title, primary and secondary text, title text and text alignment. We defined the payload data in the APL file as follows:
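For illustration, the `display` payload the template binds to might look like this (field names taken from the inputs listed above; the values are hypothetical):

```json
{
  "display": {
    "headerTitle": "Flight Search",
    "titleText": "Your flight details",
    "primaryText": "Seattle (SEA) to Boston (BOS)",
    "secondaryText": "Departs 9:30 AM, arrives 5:45 PM",
    "textAlignment": "center"
  }
}
```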
We need to send the payload from our API to the APL template for the data it requires.
- Let's start by expanding our database (flight-data.json) to support airport names and arrival and departure times for each location. We will show this data on the screen.
- Now we can extend our response for the APL template we created. Open the index.js file, create an object called "display" in the flight response, and set its parameters.
Our API handler will look as follows:
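As an illustration, each record in flight-data.json might gain fields like these (the field names and values are hypothetical):

```json
{
  "departureCity": "Seattle",
  "departureAirport": "Seattle-Tacoma International Airport (SEA)",
  "departureTime": "9:30 AM",
  "arrivalCity": "Boston",
  "arrivalAirport": "Boston Logan International Airport (BOS)",
  "arrivalTime": "5:45 PM"
}
```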
Here is the snapshot of the index.js:
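A sketch of how the handler can build the response, with the "display" object the APL template binds to (the function name, flight fields, and display text are illustrative; adapt them to your index.js):

```javascript
// Hypothetical flight lookup result; in the lab this comes from flight-data.json
const flight = {
  departureCity: 'Seattle',
  arrivalCity: 'Boston',
  departureTime: '9:30 AM',
  arrivalTime: '5:45 PM'
};

// Build the API response, including the "display" object the APL template reads
function buildFlightResponse(flight) {
  return {
    flight,
    display: {
      headerTitle: 'Flight Search',
      titleText: `${flight.departureCity} to ${flight.arrivalCity}`,
      primaryText: `Departs ${flight.departureTime}`,
      secondaryText: `Arrives ${flight.arrivalTime}`,
      textAlignment: 'center'
    }
  };
}

console.log(JSON.stringify(buildFlightResponse(flight), null, 2));
```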
Step 5: Deploy your changes and test
We are ready to deploy our changes!
- Open the terminal and navigate to the main directory of the Flight Search skill.
- Compile the code.
- Deploy the code.
- Once the deployment is completed, you can start testing.
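If you are using the Alexa Conversations CLI from the earlier labs, the compile and deploy steps typically look like this (the CLI alias may differ in your setup):

```shell
# Run from the skill's root directory
askx compile
askx deploy
```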
(Optional) How to use APL templates on "welcome" and "out-of-domain" responses
We added the first multimodal response to our skill, and you were able to test it. This optional section shows how to add multimodal responses to skill-level responses, such as the welcome prompt.
- We need to create APL templates for the "welcome" and "out-of-domain" responses. To do that, let's create two new folders under skill-package/response/display called "WelcomeResponseVisual" and "OutOfDomainResponseVisual".
- Now, we should create a "document.json" file under each folder we created.
- We will create a "welcome" prompt as below:
WelcomeResponseVisual/document.json APL code will be as follows:
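As a minimal sketch of such a document (the text and layout are illustrative; the lab's template may use a responsive layout instead):

```json
{
  "type": "APL",
  "version": "1.8",
  "mainTemplate": {
    "items": [
      {
        "type": "Container",
        "width": "100vw",
        "height": "100vh",
        "items": [
          {
            "type": "Text",
            "text": "Welcome to Flight Search",
            "textAlign": "center"
          }
        ]
      }
    ]
  }
}
```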
- Our "out-of-domain" visual response will look as indicated below:
APL code for OutOfDomainResponseVisual/document.json will be as below:
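The out-of-domain document follows the same pattern; only the text differs (again, a sketch with illustrative wording):

```json
{
  "type": "APL",
  "version": "1.8",
  "mainTemplate": {
    "items": [
      {
        "type": "Text",
        "text": "Sorry, I can't help with that. Try asking about flights.",
        "textAlign": "center"
      }
    ]
  }
}
```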
- We need to use the skill action to set the skill-wide assets. You can learn more about skill actions here.
Your ACDL file will look like the following snapshot:
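A hedged sketch of the skill action with skill-level responses attached (the argument and template names are illustrative; consult the ACDL skill action reference for the exact signature):

```
// Illustrative: wires the welcome and out-of-domain visuals in at the skill level
mySkill = skill(
  locales = [Locale.en_US],
  dialogs = [FlightSearchDialog],
  skillLevelResponses = SkillLevelResponses {
    welcome = MultiModalResponse {
      apla = welcome_apla,
      apl = WelcomeResponseVisual
    },
    out_of_domain = MultiModalResponse {
      apla = out_of_domain_apla,
      apl = OutOfDomainResponseVisual
    }
  }
)
```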
- Compile the code.
- Deploy the code.
- Once the deployment is completed, you can start testing.
Wrap-up
Congratulations! You are now equipped to develop an Alexa Conversations skill by using ACDL and have learned how your skill can support multimodality.
Code
If your skill isn't working or you're getting a syntax error, download the code from the GitHub repository.