Define Responses from Alexa for Alexa Conversations
• GA: en-US, en-AU, en-CA, en-IN, en-GB, de-DE, ja-JP, es-ES, es-US
• Beta: it-IT, fr-CA, fr-FR, pt-BR, es-MX, ar-SA, hi-IN
When an interaction between the user and the skill triggers Alexa Conversations to invoke an API in your skill code, you use a response to format the API output and return it to the user through the Alexa text-to-speech (TTS) engine.
For example, when the user asks a weather skill for the weather, the output of the GetWeather API might be the city, date, high temperature, and low temperature, as follows.
User: What's the weather?
Alexa: In what city?
User: Seattle.
Alexa: In Seattle, it's 70 degrees.
You define a response for Alexa to pass this information to the user with TTS that includes variables. In the previous example, you might define the TTS to be In {returnedWeather.city}, it's {returnedWeather.temperature} degrees.
You specify the audio (TTS) response as an Alexa Presentation Language for Audio (APLA) document, and the visual response as an Alexa Presentation Language (APL) document.
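For example, the TTS above might be defined in an APLA document similar to the following sketch. This is illustrative only: the payload binding path and the returnedWeather field names are assumptions drawn from the example above, and they must match the argument name you assign when you annotate the dialog.

```json
{
  "type": "APLA",
  "version": "0.91",
  "description": "Illustrative sketch: the returnedWeather argument and the payload binding path are assumptions.",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "item": {
      "type": "Speech",
      "content": "In ${payload.returnedWeather.city}, it's ${payload.returnedWeather.temperature} degrees."
    }
  }
}
```

At runtime, Alexa Conversations fills the argument with the API result you assign when you write the dialog, so the prompt resolves to, for example, "In Seattle, it's 70 degrees."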
To get started with Alexa Conversations, you can step through a pet match skill-building tutorial, download the code for a pizza-ordering reference skill, or use an Alexa-hosted sample skill template that includes an Alexa Conversations skill configuration and back-end skill code.
Overview of responses
Responses consist of the following elements:
- Audio response – (Required) The output is the text-to-speech (TTS) and, optionally, the visual response that Alexa provides to the user. The TTS is in an Alexa Presentation Language (APL) for Audio document. For details about APL for Audio, see APL for Audio Reference.
  APLA documents can incorporate input arguments that pass into the response. Continuing the previous example, the APLA document might contain the following text in its content field: In {returnedWeather.city}, it's {returnedWeather.temperature} degrees.
  Note: You can't use the DATA section of the APLA editor (accessible from the left pane) to inject static data into a template. The DATA section is only for testing. The only data that's injected into the dataSources object of the template is the data that you assign to the API response when you write your dialog.
- Visual response – (Optional) To specify a visual response, you use Alexa Presentation Language (APL). For details, see Alexa Presentation Language (APL). A minimal visual sketch appears after this list.
- Arguments – (Optional) If a response requires information from the API that triggered it, you specify input arguments to the response. For example, suppose that you create a weather skill. Your GetWeather API might return a returnedWeather slot type that contains fields for the city, date, high temperature, and low temperature. Alexa Conversations passes these fields to the response template as an input argument.
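As a sketch of how a visual response might consume the same API result, the following minimal APL document assumes the returnedWeather argument is bound through payload; the field names and the binding path are assumptions for this example, not values prescribed by Alexa Conversations.

```json
{
  "type": "APL",
  "version": "2023.1",
  "description": "Illustrative sketch: the returnedWeather argument and the payload binding path are assumptions.",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "items": [
      {
        "type": "Text",
        "text": "${payload.returnedWeather.city}: high ${payload.returnedWeather.highTemp}, low ${payload.returnedWeather.lowTemp}",
        "textAlign": "center"
      }
    ]
  }
}
```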
Use multiple responses
To reduce the number of dialogs, you can include multiple responses in a single Alexa turn. Adding multiple responses reduces the number of dialogs because a single turn can cover any combination of requesting arguments, confirming arguments, or confirming an API. The user interface also suggests responses when possible, to help you increase coverage of dialog variations.
Confirm arguments
Alexa can confirm arguments in a single turn, and then, if the user denies the arguments, Alexa can request the arguments again. To use this feature, use the Confirm Args or Confirm API dialog act when you annotate an Alexa response. For the response, specify a prompt such as, "Do you want the weather for {city} on {date}?" For a tutorial that demonstrates how to confirm arguments, see Tutorial: Confirm API Arguments for Alexa Conversations.
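Expressed as an APLA document, such a confirmation prompt might look like the following sketch. The city and date argument names and the payload binding path are assumptions for this example and should match the arguments you pass to the response annotated with the Confirm Args dialog act.

```json
{
  "type": "APLA",
  "version": "0.91",
  "description": "Illustrative sketch: the city and date arguments and the payload binding path are assumptions.",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "item": {
      "type": "Speech",
      "content": "Do you want the weather for ${payload.city} on ${payload.date}?"
    }
  }
}
```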
Example
For examples of how to create responses while annotating a dialog, see Step 5: Annotate the Alexa turn that invokes an API and Step 6: Annotate the Alexa turn that asks for information in Tutorial: Annotate a Dialog for Alexa Conversations.
Related topics
- APL for Audio Reference
- Alexa Presentation Language
- Dialog Act Reference for Alexa Conversations
- About Alexa Conversations
- Get Started with Alexa Conversations
Last updated: Nov 27, 2023