Incorporate Input from Alexa-Enabled Devices with a Screen into an Alexa Conversations Skill
• GA: en-US, en-AU, en-CA, en-IN, en-GB, de-DE, ja-JP, es-ES, es-US
• Beta: it-IT, fr-CA, fr-FR, pt-BR, es-MX, ar-SA, hi-IN
As with other Alexa skills, you can create visual experiences to accompany your Alexa Conversations skill by using Alexa Presentation Language (APL). You implement APL interfaces in Alexa Conversations skills by following the same high-level flow as for custom skills. For details, see High-level steps to implement APL in your skill.
However, to incorporate an APL touch event into an Alexa Conversations experience, you must send a Dialog.DelegateRequest directive from the handler function that receives the touch event.
Incorporating an APL touch event
To pass the input from an APL touch event to Alexa Conversations, you send a Dialog.DelegateRequest directive from your handler function. You populate the name field with the name of the utterance set, and you populate the slots object with the relevant variable values.
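The values that you put in the slots object typically come from the arguments of the APL touch event (an Alexa.Presentation.APL.UserEvent request) that your handler receives. The following is a minimal sketch of reading those arguments, assuming an ASK SDK v2 for Node.js skill written in TypeScript and an APL TouchWrapper whose SendEvent command passes the chosen food and color as arguments; the function and argument names are illustrative, not part of this example.

import * as Alexa from 'ask-sdk-core';

// Minimal sketch: read the values that an APL TouchWrapper sent with its
// SendEvent command, for example "arguments": ["${food}", "${color}"] in the APL document.
function getTouchedValues(handlerInput: Alexa.HandlerInput): { food?: string; color?: string } {
  if (Alexa.getRequestType(handlerInput.requestEnvelope) !== 'Alexa.Presentation.APL.UserEvent') {
    return {};
  }
  // Cast loosely so the sketch doesn't depend on a specific ask-sdk-model version.
  const userEvent = handlerInput.requestEnvelope.request as any;
  const [food, color] = (userEvent.arguments ?? []) as string[];
  return { food, color };
}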
Example
The following example passes the input from an APL touch event to Alexa Conversations.
{
  "type": "Dialog.DelegateRequest",
  "target": "AMAZON.Conversations",
  "period": {
    "until": "EXPLICIT_RETURN"
  },
  "updatedRequest": {
    "type": "Dialog.InputRequest",
    "input": {
      "name": "FoodAndColorUtteranceSet",
      "slots": {
        "food": {
          "name": "food",
          "value": "sandwich"
        },
        "color": {
          "name": "color",
          "value": "blue"
        }
      }
    }
  }
}
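In skill code, your handler for the touch event returns this directive as part of the response. Below is a minimal sketch under the same assumptions as the earlier snippet (ASK SDK v2 for Node.js, TypeScript); FoodAndColorUtteranceSet and the food and color slots mirror the JSON example above, and the handler name is hypothetical.

import * as Alexa from 'ask-sdk-core';
import { Response } from 'ask-sdk-model';

// Hypothetical handler that receives the APL touch event and delegates the
// touched values to Alexa Conversations with a Dialog.DelegateRequest directive.
const FoodAndColorTouchEventHandler: Alexa.RequestHandler = {
  canHandle(handlerInput: Alexa.HandlerInput): boolean {
    // In a real skill you might also check the event's source or token.
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'Alexa.Presentation.APL.UserEvent';
  },
  handle(handlerInput: Alexa.HandlerInput): Response {
    // Assumes the APL SendEvent command passed the chosen values as arguments.
    const [food, color] = ((handlerInput.requestEnvelope.request as any).arguments ?? []) as string[];

    // Same payload as the JSON example above, built from the touched values.
    const delegateDirective = {
      type: 'Dialog.DelegateRequest',
      target: 'AMAZON.Conversations',
      period: { until: 'EXPLICIT_RETURN' },
      updatedRequest: {
        type: 'Dialog.InputRequest',
        input: {
          name: 'FoodAndColorUtteranceSet',
          slots: {
            food: { name: 'food', value: food },
            color: { name: 'color', value: color },
          },
        },
      },
    };

    return handlerInput.responseBuilder
      // The loose cast keeps the sketch independent of exact ask-sdk-model typings.
      .addDirective(delegateDirective as any)
      .getResponse();
  },
};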
Related topics
- Steps to Add Alexa Conversations to an Existing Skill
- Write Dialogs for Alexa Conversations
- Get Started With Alexa Conversations
Last updated: Nov 27, 2023