Add APL Support to Your Skill Code
The ASK SDK v2 provides support for Alexa Presentation Language (APL) directives and requests. You can use the ASK SDK with Node.js, Java, and Python. This topic provides an overview and some examples of how you can use APL directives and requests in a skill built with the ASK SDK v2.
- Get started with the SDK and APL
- Configure the skill to support the APL interface
- Create an APL document to display
- Make sure the user's device supports APL
- Return RenderDocument in your request handler response
- Handle a UserEvent request
- Return ExecuteCommands in a request handler response
- Related topics
Get started with the SDK and APL
To understand this topic, you should:
- Be familiar with building custom skills.
- Understand how you handle requests in your code.
- Be familiar with the ASK SDK in Node, Java, or Python.
- Read the overview of APL in Add Visuals and Audio to Your Skill.
- Make sure your skill is using an up-to-date version of the ASK SDK. See the dependency sketch after this list.
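For example, a Node.js skill declares the SDK in the dependencies section of its package.json. The following is a minimal sketch, assuming your skill uses the ask-sdk-core and ask-sdk-model packages; the version ranges shown are illustrative, so use the latest v2 SDK release available:
{
    "dependencies": {
        "ask-sdk-core": "^2.0.0",
        "ask-sdk-model": "^1.0.0"
    }
}
Python skills list ask-sdk-core in requirements.txt, and Java skills declare the ask-sdk artifact in pom.xml.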
The sample code shown is based on the "hello world" samples provided for Node.js, Java, and Python. The sections below show how you might modify these samples to add APL output in addition to the original spoken "hello world" response.
Configure the skill to support the APL interface
Before adding any APL directives in your code, you must configure the skill to support the Alexa.Presentation.APL interface. If the Alexa.Presentation.APL interface is not enabled, sending any APL directives causes an error.
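You can enable the interface in the developer console, or in the skill manifest (skill.json) if you manage the skill with the ASK CLI. The following is a minimal sketch of the relevant manifest fragment, assuming a custom skill and omitting all other manifest properties:
{
    "manifest": {
        "apis": {
            "custom": {
                "interfaces": [
                    {
                        "type": "ALEXA_PRESENTATION_APL"
                    }
                ]
            }
        }
    }
}
For the full configuration steps, see Configure a Skill to Support APL.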
Create an APL document to display
To display content on the screen, you need to create an APL document in JSON format and then make it available to your code.
Save the document as a JSON file
Create the document as a JSON file that follows the APL document structure. For example, save the following into a file called helloworldDocument.json:
{
    "type": "APL",
    "version": "2024.2",
    "description": "A hello world APL document.",
    "theme": "dark",
    "mainTemplate": {
        "parameters": [
            "payload"
        ],
        "items": [
            {
                "type": "Text",
                "height": "100%",
                "textAlign": "center",
                "textAlignVertical": "center",
                "text": "Hello World!"
            }
        ]
    }
}
This is a very simple document that places a single Text component on the screen. The text property defines the actual text to display: "Hello World!". The textAlign and textAlignVertical properties center the text on the viewport.
Place the document where your code can access it
The intent handlers in your skill need to be able to send the JSON file containing your document as part of the RenderDocument directive. Therefore, place the file where you can access it later.
Save the document JSON file in the same folder as your index.js file. For example, this directory structure for the "Hello World" Node.js skill sample shows the helloworldDocument.json document in the lambda folder, which contains index.js:
| skill.json
|
+---.ask
| config
|
+---hooks
| pre_deploy_hook.ps1
|
+---lambda
| helloworldDocument.json
| index.js
| package.json
| util.js
|
\---models
en-US.json
With the document file in this location, your handler can read it using the require() function, as described later.
Save the document JSON file at the same level as lambda_function.py. For example, this directory structure for the "Hello World" Python skill shows the helloworldDocument.json document in the lambda/py folder, which contains lambda_function.py:
| skill.json
|
+---.ask
| config
|
+---hooks
| pre_deploy_hook.ps1
|
+---lambda
| +---py
| | | lambda_function.py
| | | requirements.txt
| | | helloworldDocument.json
\---models
en-US.json
With this in place, your handler can read it using Python's built-in open() function and the json.load() method, as described later.
Save the document JSON file in a resources folder at the same level as src, then update the pom.xml file to include the new resources folder. For example, this directory structure for the "Hello World" Java skill shows the helloworldDocument.json document in a resources folder:
| pom.xml
| README.md
+---models
| en-US.json
|
+---resources
| helloworldDocument.json
|
+---src
| | HelloWorldStreamHandler.java
| |
| +---handlers
| | CancelandStopIntentHandler.java
| | FallbackIntentHandler.java
| | HelloWorldIntentHandler.java
| | HelpIntentHandler.java
| | LaunchRequestHandler.java
| | SessionEndedRequestHandler.java
| | StartOverIntentHandler.java
| |
| \---util
| Constants.java
| HelperMethods.java
The <build> section of the pom.xml file then identifies the resources folder with the <resources> item:
<build>
    <sourceDirectory>src</sourceDirectory>
    <resources>
        <resource>
            <directory>resources</directory>
        </resource>
    </resources>
</build>
With this in place, your handler can read it using the Java File class, as described later.
Make sure the user's device supports APL
Users can invoke your skill from many different kinds of Alexa-enabled devices, some with screens and some without. Before you include any APL directives in your response, always check the supported interfaces in the request and verify that the device can display your content. In addition, make sure your normal speech output is appropriate for the user's device. For example, don't say something like "see XYZ on the screen" if the user's device does not have a screen.
You can check the supported interfaces for the device in the context.System.device.supportedInterfaces object included in every request. When the user's device supports APL, this object includes an Alexa.Presentation.APL object.
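For example, a request from an APL-capable device includes a fragment similar to the following sketch. The exact properties vary by device and APL runtime version; the key point is the presence of the Alexa.Presentation.APL key:
"context": {
    "System": {
        "device": {
            "supportedInterfaces": {
                "Alexa.Presentation.APL": {
                    "runtime": {
                        "maxVersion": "2024.2"
                    }
                }
            }
        }
    }
}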
This code example uses the Alexa Skills Kit SDK for Node.js (v2).
The Node.js SDK provides the getSupportedInterfaces function to get the list of all interfaces the user's device supports. Check this list for the Alexa.Presentation.APL interface before you send any APL directives. Note the syntax needed to refer to the interface, since the name contains dot (.) characters.
// global const for the SDK
const Alexa = require('ask-sdk-core');
//...

const HelloWorldIntentHandler = {
    canHandle(handlerInput) {
        // logic to determine when to use this handler
    },
    handle(handlerInput) {
        // ...other code used with all devices

        // Check for APL support on the user's device
        if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            // Code to send APL directives can go here
        }
    }
};
This code example uses the Alexa Skills Kit SDK for Python.
You can use the get_supported_interfaces() method in the utils module to retrieve a SupportedInterfaces object that contains a list of all interfaces supported by the device. Check the value of the alexa_presentation_apl attribute for APL support. This attribute is None if the device does not support APL.
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_model import Response
# Use request utils
from ask_sdk_core.utils import get_supported_interfaces
# Other imports here


class HelloWorldIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        # logic to determine when to use this handler
        pass

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        if get_supported_interfaces(
                handler_input).alexa_presentation_apl is not None:
            # code to send APL directives goes here
            pass
This code example uses the Alexa Skills Kit SDK for Java.
You can use the getSupportedInterfaces() method on the RequestHelper class to retrieve a SupportedInterfaces object that contains a list of all interfaces supported by the device. Call getAlexaPresentationAPL() to check for APL support. This method returns null if the device does not support APL.
// Use the RequestHelper class
import com.amazon.ask.request.RequestHelper;
// (other imports here)

public class HelloWorldIntentHandler implements RequestHandler {

    @Override
    public boolean canHandle(HandlerInput input) {
        // logic to determine when to use this handler
    }

    @Override
    public Optional<Response> handle(HandlerInput input) {
        // ...other code used with all devices

        // Check for APL support on the user's device
        if (RequestHelper.forHandlerInput(input)
                .getSupportedInterfaces()
                .getAlexaPresentationAPL() != null) {
            // Code to send APL directives can go here
        }
    }
}
Note that the Alexa.Presentation.APL object is included in context.System.device.supportedInterfaces when both of the following are true:
- The skill has been configured to support APL.
- The user's device supports APL.
If the checks shown above evaluate to false when you invoke the skill from a device that should support APL (such as an Echo Show), your skill is not configured for APL. Refer back to Configure a Skill to Support APL.
Return RenderDocument in your request handler response
In your skill code, you create request handlers to act on the incoming requests. The handler returns a response with the text for Alexa to speak and directives with other instructions. To display APL content, include the Alexa.Presentation.APL.RenderDocument directive. The payload of this directive is the APL document to display.
Note the following guidelines for RenderDocument:
- Return RenderDocument only for devices that support the Alexa.Presentation.APL interface. Check for this as described earlier.
- When you build the RenderDocument directive, you need to read the document JSON file into a variable that you can pass in the document property of the directive.
- The RenderDocument directive has a required token property. Set this to a string that identifies the document. You can use this token to identify the document in other directives (such as ExecuteCommands), and to determine which document is currently displayed on the viewport. In the code example below, the token is set to "helloworldToken".
This example shows the code for a simple HelloWorldIntentHandler that sends the document described earlier. Note that the speech included in the response also differs depending on whether the user's device supports APL.
This code example uses the Alexa Skills Kit SDK for Node.js (v2).
This handler runs when the skill gets an IntentRequest for HelloWorldIntent. To read the helloworldDocument.json file into a variable, you can use the require() function.
const Alexa = require('ask-sdk-core');

// Read in the APL documents for use in handlers
const helloworldDocument = require('./helloworldDocument.json');

// Tokens used when sending the APL directives
const HELLO_WORLD_TOKEN = 'helloworldToken';

const HelloWorldIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'HelloWorldIntent';
    },
    handle(handlerInput) {
        let speakOutput = 'Hello World!';
        let responseBuilder = handlerInput.responseBuilder;
        if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            // Add the RenderDocument directive to the responseBuilder
            responseBuilder.addDirective({
                type: 'Alexa.Presentation.APL.RenderDocument',
                token: HELLO_WORLD_TOKEN,
                document: helloworldDocument
            });
            // Tailor the speech for a device with a screen.
            speakOutput += " You should now also see my greeting on the screen.";
        } else {
            // User's device does not support APL, so tailor the speech to this situation
            speakOutput += " This example would be more interesting on a device with a screen, such as an Echo Show or Fire TV.";
        }
        return responseBuilder
            .speak(speakOutput)
            .getResponse();
    }
};
This code example uses the Alexa Skills Kit SDK for Python.
This handler runs when the skill gets an IntentRequest for HelloWorldIntent. To read the helloworldDocument.json file into a variable, you can open the file and load its contents with the json.load method. This example shows a utility function, _load_apl_document, that takes a file path and returns the document as a dict that you can pass in the directive.
The ASK Python SDK includes model classes to construct the RenderDocument directive and then add it to the response.
import json

from ask_sdk_core.utils import (
    is_intent_name, get_supported_interfaces)
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_model import Response
from ask_sdk_model.interfaces.alexa.presentation.apl import (
    RenderDocumentDirective)

from typing import Dict, Any

# APL Document file paths for use in handlers
hello_world_doc_path = "helloworldDocument.json"

# Tokens used when sending the APL directives
HELLO_WORLD_TOKEN = "helloworldToken"


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the apl json document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class HelloWorldIntentHandler(AbstractRequestHandler):
    """Handler for Hello World Intent."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return is_intent_name("HelloWorldIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = "Hello World!"
        response_builder = handler_input.response_builder

        if get_supported_interfaces(
                handler_input).alexa_presentation_apl is not None:
            response_builder.add_directive(
                RenderDocumentDirective(
                    token=HELLO_WORLD_TOKEN,
                    document=_load_apl_document(hello_world_doc_path)
                )
            )
            # Tailor the speech for a device with a screen
            speak_output += (" You should now also see my greeting on the "
                             "screen.")
        else:
            # User's device does not support APL, so tailor the speech to
            # this situation
            speak_output += (" This example would be more interesting on a "
                             "device with a screen, such as an Echo Show or "
                             "Fire TV.")

        return response_builder.speak(speak_output).response
This code example uses the Alexa Skills Kit SDK for Java.
This handler runs when the skill gets an IntentRequest for HelloWorldIntent. To read the helloworldDocument.json file into a variable, you can parse the JSON into a String-to-Object map (HashMap<String, Object>). This example uses the File class to read the file, then parses the JSON into a map with the ObjectMapper class.
The ASK Java SDK includes builder methods to construct the RenderDocument directive and then add it to the response.
package handlers;

import static com.amazon.ask.request.Predicates.intentName;
import static util.Constants.HELLO_WORLD_TOKEN; // String "helloworldToken"

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.dispatcher.request.handler.RequestHandler;
import com.amazon.ask.exception.AskSdkException;
import com.amazon.ask.model.Response;
import com.amazon.ask.model.interfaces.alexa.presentation.apl.RenderDocumentDirective;
import com.amazon.ask.request.RequestHelper;
import com.amazon.ask.response.ResponseBuilder;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class HelloWorldIntentHandler implements RequestHandler {

    @Override
    public boolean canHandle(HandlerInput input) {
        return input.matches(intentName("HelloWorldIntent"));
    }

    @Override
    public Optional<Response> handle(HandlerInput input) {
        String speechText = "Hello world.";
        ResponseBuilder responseBuilder = input.getResponseBuilder();

        if (RequestHelper.forHandlerInput(input)
                .getSupportedInterfaces()
                .getAlexaPresentationAPL() != null) {
            try {
                // Retrieve the JSON document and put into a string/object map
                ObjectMapper mapper = new ObjectMapper();
                TypeReference<HashMap<String, Object>> documentMapType =
                        new TypeReference<HashMap<String, Object>>() {};
                Map<String, Object> document = mapper.readValue(
                        new File("helloworldDocument.json"),
                        documentMapType);

                // Use builder methods in the SDK to create the directive.
                RenderDocumentDirective renderDocumentDirective = RenderDocumentDirective.builder()
                        .withToken(HELLO_WORLD_TOKEN)
                        .withDocument(document)
                        .build();

                // Add the directive to a responseBuilder.
                responseBuilder.addDirective(renderDocumentDirective);

                // Tailor the speech for a device with a screen.
                speechText += " You should now also see my greeting on the screen.";
            } catch (IOException e) {
                throw new AskSdkException("Unable to read or deserialize the hello world document", e);
            }
        } else {
            // Change the speech output since the device does not have a screen.
            speechText += " This example would be more interesting on a device with a screen, such as an Echo Show or Fire TV.";
        }

        // add the speech to the response and return it.
        return responseBuilder
                .withSpeech(speechText)
                .withSimpleCard("Hello World with APL", speechText)
                .build();
    }
}
Handle a UserEvent request
The Alexa.Presentation.APL.UserEvent request lets your skill receive messages in response to the user's actions on the device. To do this, you use the SendEvent command in your APL document. This command sends your skill a UserEvent request. Use a normal request handler in your skill to take an action on this request.
A common use of SendEvent / UserEvent is for UI elements such as buttons. You define the button to run the SendEvent command when the user selects the button. Your skill then gets UserEvent and can respond to the button press.
These sections illustrate this by:
- Modifying the previous helloworldDocument.json document to include a SendEvent command.
- Adding a handler to process the UserEvent generated by the new SendEvent command.
Create a document with a button that runs the SendEvent command
This document example displays text, followed by an AlexaButton responsive component. The primaryAction property on the AlexaButton component defines the command that should run when the user selects the button. This button runs two commands in sequence: AnimateItem to fade the opacity of the Text component, and SendEvent to send a UserEvent request to the skill.
Save this document as "helloworldWithButtonDocument.json" as described earlier.
{
    "type": "APL",
    "version": "2024.2",
    "description": "This APL document places text on the screen and includes a button that sends the skill a message when selected. The button is a pre-defined responsive component from the alexa-layouts package.",
    "import": [
        {
            "name": "alexa-layouts",
            "version": "1.7.0"
        }
    ],
    "mainTemplate": {
        "parameters": [
            "payload"
        ],
        "items": [
            {
                "type": "Container",
                "height": "100vh",
                "width": "100vw",
                "items": [
                    {
                        "type": "Text",
                        "id": "helloTextComponent",
                        "height": "75%",
                        "text": "Hello world! This APL document includes a button from the alexa-layouts package. Touch the button to see what happens.",
                        "textAlign": "center",
                        "textAlignVertical": "center",
                        "paddingLeft": "@spacingSmall",
                        "paddingRight": "@spacingSmall",
                        "paddingTop": "@spacingXLarge",
                        "style": "textStyleBody"
                    },
                    {
                        "type": "AlexaButton",
                        "alignSelf": "center",
                        "id": "fadeHelloTextButton",
                        "buttonText": "This is a button",
                        "primaryAction": [
                            {
                                "type": "AnimateItem",
                                "duration": 3000,
                                "componentId": "helloTextComponent",
                                "value": {
                                    "property": "opacity",
                                    "to": 0
                                }
                            },
                            {
                                "type": "SendEvent",
                                "arguments": [
                                    "user clicked the button"
                                ]
                            }
                        ]
                    }
                ]
            }
        ]
    }
}
To display this document, you return RenderDocument from an intent handler. In this example, note that the document to display is called helloworldWithButtonDocument.json and the token is helloworldWithButtonToken:
This code example uses the Alexa Skills Kit SDK for Node.js (v2).
const Alexa = require('ask-sdk-core');

// Read in the APL documents for use in handlers
const helloworldDocument = require('./helloworldDocument.json');
const helloworldWithButtonDocument = require('./helloworldWithButtonDocument.json');

// Tokens used when sending the APL directives
const HELLO_WORLD_TOKEN = 'helloworldToken';
const HELLO_WORLD_WITH_BUTTON_TOKEN = 'helloworldWithButtonToken';

const HelloWorldWithButtonIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'HelloWorldWithButtonIntent';
    },
    handle(handlerInput) {
        let speakOutput = "Hello world.";
        let responseBuilder = handlerInput.responseBuilder;
        if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            // Add the RenderDocument directive to the responseBuilder
            responseBuilder.addDirective({
                type: 'Alexa.Presentation.APL.RenderDocument',
                token: HELLO_WORLD_WITH_BUTTON_TOKEN,
                document: helloworldWithButtonDocument
            });
            // Tailor the speech for a device with a screen.
            speakOutput += " Welcome to Alexa Presentation Language. Click the button to see what happens!";
        } else {
            speakOutput += " This example would be more interesting on a device with a screen, such as an Echo Show or Fire TV.";
        }
        return responseBuilder
            .speak(speakOutput)
            //.reprompt('add a reprompt if you want to keep the session open for the user to respond')
            .getResponse();
    }
};
This code example uses the Alexa Skills Kit SDK for Python.
import json

from ask_sdk_core.utils import (
    is_intent_name, get_supported_interfaces)
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_model import Response
from ask_sdk_model.interfaces.alexa.presentation.apl import (
    RenderDocumentDirective)

from typing import Dict, Any

# APL Document file paths for use in handlers
hello_world_doc_path = "helloworldDocument.json"
hello_world_button_doc_path = "helloworldWithButtonDocument.json"

# Tokens used when sending the APL directives
HELLO_WORLD_TOKEN = "helloworldToken"
HELLO_WORLD_WITH_BUTTON_TOKEN = "helloworldWithButtonToken"


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the apl json document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class HelloWorldWithButtonIntentHandler(AbstractRequestHandler):
    """Handler for Hello World With Button Intent."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return is_intent_name("HelloWorldWithButtonIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = "Hello World!"
        response_builder = handler_input.response_builder

        if get_supported_interfaces(
                handler_input).alexa_presentation_apl is not None:
            response_builder.add_directive(
                RenderDocumentDirective(
                    token=HELLO_WORLD_WITH_BUTTON_TOKEN,
                    document=_load_apl_document(hello_world_button_doc_path)
                )
            )
            # Tailor the speech for a device with a screen
            speak_output += (" Welcome to Alexa Presentation Language. "
                             "Click the button to see what happens!")
        else:
            # User's device does not support APL, so tailor the speech to
            # this situation
            speak_output += (" This example would be more interesting on a "
                             "device with a screen, such as an Echo Show or "
                             "Fire TV.")

        return response_builder.speak(speak_output).response
This code example uses the Alexa Skills Kit SDK for Java.
package handlers;

import static com.amazon.ask.request.Predicates.intentName;
import static util.Constants.HELLO_WORLD_WITH_BUTTON_TOKEN; // String "helloworldWithButtonToken"

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.dispatcher.request.handler.RequestHandler;
import com.amazon.ask.exception.AskSdkException;
import com.amazon.ask.model.Response;
import com.amazon.ask.model.interfaces.alexa.presentation.apl.RenderDocumentDirective;
import com.amazon.ask.request.RequestHelper;
import com.amazon.ask.response.ResponseBuilder;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class HelloWorldWithButtonIntentHandler implements RequestHandler {

    @Override
    public boolean canHandle(HandlerInput input) {
        return input.matches(intentName("HelloWorldWithButtonIntent"));
    }

    @Override
    public Optional<Response> handle(HandlerInput input) {
        String speechText = "Hello world.";
        ResponseBuilder responseBuilder = input.getResponseBuilder();

        if (RequestHelper.forHandlerInput(input)
                .getSupportedInterfaces()
                .getAlexaPresentationAPL() != null) {
            try {
                // The JSON file containing the Hello World APL
                // document is in the "resources" folder. Retrieve it
                // and add it to the directive.
                ObjectMapper mapper = new ObjectMapper();
                TypeReference<HashMap<String, Object>> documentMapType =
                        new TypeReference<HashMap<String, Object>>() {};
                Map<String, Object> document = mapper.readValue(
                        new File("helloworldWithButtonDocument.json"),
                        documentMapType);

                // Build the directive.
                RenderDocumentDirective renderDocumentDirective = RenderDocumentDirective.builder()
                        .withToken(HELLO_WORLD_WITH_BUTTON_TOKEN)
                        .withDocument(document)
                        .build();
                responseBuilder.addDirective(renderDocumentDirective);

                // update speech to mention the screen
                speechText += " Welcome to Alexa Presentation Language. Click the button to see what happens!";
            } catch (IOException e) {
                throw new AskSdkException("Unable to read or deserialize the hello world document", e);
            }
        } else {
            speechText += " This example would be more interesting on a device with a screen, such as an Echo Show or Fire TV.";
        }

        return responseBuilder
                .withSpeech(speechText)
                .withSimpleCard("Hello World with APL", speechText)
                .build();
    }
}
Add a UserEvent request handler
When the user selects the button in the previous example, the text displayed on the screen fades away (the AnimateItem command). After this, Alexa sends the skill the UserEvent request (the SendEvent command). Your skill can respond with actions such as normal Alexa speech and new APL directives.
The UserEvent handler typically needs to identify the component that triggered the event. This is because a skill might have multiple documents with different buttons and other elements that can send requests to your skill. All of these requests have the type Alexa.Presentation.APL.UserEvent. To distinguish between events, you can use the source or arguments properties on the request.
For example, clicking the button defined in the earlier document sends this request:
{
    "request": {
        "type": "Alexa.Presentation.APL.UserEvent",
        "requestId": "amzn1.echo-api.request.1",
        "timestamp": "2019-10-04T18:48:22Z",
        "locale": "en-US",
        "arguments": [
            "user clicked the button"
        ],
        "components": {},
        "source": {
            "type": "TouchWrapper",
            "handler": "Press",
            "id": "fadeHelloTextButton",
            "value": false
        },
        "token": "helloworldWithButtonToken"
    }
}
The arguments property contains an array of the arguments defined in the arguments property of the SendEvent command. The source property contains information about the component that triggered the event, including the ID for the component.
You can therefore create a handler that operates on any UserEvent request where the request.source.id matches the ID of the component as defined in the document. This handler responds to the user pressing the "fadeHelloTextButton" button with speech acknowledging the button press:
This code example uses the Alexa Skills Kit SDK for Node.js (v2).
const HelloWorldButtonEventHandler = {
    canHandle(handlerInput) {
        // Since an APL skill might have multiple buttons that generate UserEvents,
        // use the event source ID to determine the button press that triggered
        // this event and use the correct handler. In this example, the string
        // 'fadeHelloTextButton' is the ID we set on the AlexaButton in the document.
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'Alexa.Presentation.APL.UserEvent'
            && handlerInput.requestEnvelope.request.source.id === 'fadeHelloTextButton';
    },
    handle(handlerInput) {
        const speakOutput = "Thank you for clicking the button! I imagine you already noticed that the text faded away. Tell me to start over to bring it back!";
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt("Tell me to start over if you want me to bring the text back into view. Or, you can just say hello again.")
            .getResponse();
    }
};
This code example uses the Alexa Skills Kit SDK for Python.
from ask_sdk_core.utils import (
    is_request_type, is_intent_name, get_supported_interfaces)
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_model import Response
from ask_sdk_model.interfaces.alexa.presentation.apl import UserEvent


class HelloWorldButtonEventHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        # Since an APL skill might have multiple buttons that generate
        # UserEvents, use the event source ID to determine the button press
        # that triggered this event and use the correct handler.
        # In this example, the string 'fadeHelloTextButton' is the ID we set
        # on the AlexaButton in the document.
        # The user_event.source is a dict object. We can retrieve the id
        # using the get method on the dictionary.
        if is_request_type("Alexa.Presentation.APL.UserEvent")(handler_input):
            user_event = handler_input.request_envelope.request  # type: UserEvent
            return user_event.source.get("id") == "fadeHelloTextButton"
        else:
            return False

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speech_text = ("Thank you for clicking the button! I imagine you "
                       "already noticed that the text faded away. Tell me to "
                       "start over to bring it back!")
        return handler_input.response_builder.speak(speech_text).ask(
            "Tell me to start over if you want me to bring the text back into "
            "view. Or, you can just say hello again.").response
This code example uses the Alexa Skills Kit SDK for Java.
package handlers;

import java.util.Map;
import java.util.Optional;

import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.dispatcher.request.handler.impl.UserEventHandler;
import com.amazon.ask.model.Response;
import com.amazon.ask.model.interfaces.alexa.presentation.apl.UserEvent;

public class HelloWorldButtonEventHandler implements UserEventHandler {

    @Override
    public boolean canHandle(HandlerInput input, UserEvent userEvent) {
        // This is a typed handler, so it only runs for UserEvents.
        // Since an APL skill might have multiple buttons that generate UserEvents,
        // use the event source ID to determine the button press that triggered
        // this event and use the correct handler. In this example, the string
        // 'fadeHelloTextButton' is the ID we set on the AlexaButton in the document.
        // The userEvent.getSource() method returns an Object. Cast as a Map so we
        // can retrieve the id.
        Map<String, Object> eventSourceObject = (Map<String, Object>) userEvent.getSource();
        String eventSourceId = (String) eventSourceObject.get("id");
        return eventSourceId.equals("fadeHelloTextButton");
    }

    @Override
    public Optional<Response> handle(HandlerInput input, UserEvent userEvent) {
        String speechText = "Thank you for clicking the button! I imagine you already noticed that the text faded away. Tell me to start over to bring it back!";
        return input.getResponseBuilder()
                .withSpeech(speechText)
                .withReprompt("Tell me to start over if you want me to bring the text back into view. Or, you can just say hello again.")
                .build();
    }
}
The above code sends a response that tells Alexa to say "Thank you for clicking the button! I imagine you already noticed that the text faded away. Tell me to start over to bring it back!"
Return ExecuteCommands in a request handler response
Your skill can trigger APL commands with the Alexa.Presentation.APL.ExecuteCommands directive. This lets you run a command in response to a user interaction such as a spoken utterance.
The ExecuteCommands directive has a required token property. This must match the token that you provided in the RenderDocument directive for the currently-displayed document. If the tokens don't match, the device does not run the commands. Before you send ExecuteCommands to run against a document that was rendered in an earlier directive, check the context in the request to confirm that the document you expect is displayed.
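For reference, the visual context arrives in the request envelope under context["Alexa.Presentation.APL"]. The following is a trimmed sketch, assuming the button document from the previous section is currently displayed; other context properties (such as System) and other APL context properties (such as componentsVisibleOnScreen) are omitted here:
"context": {
    "Alexa.Presentation.APL": {
        "token": "helloworldWithButtonToken"
    }
}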
You can also include ExecuteCommands in the same response as a RenderDocument directive. Again, the token needs to be the same in both directives.
This code example adds a StartOverIntentHandler. When the user says "start over", the skill checks the current visual context to determine whether the device is displaying the document with the token "helloworldWithButtonToken". If it is, the skill sends ExecuteCommands with an AnimateItem command that changes the opacity of the Text component from 0 back to 1 over three seconds, to create a fade-in effect.
This code example uses the Alexa Skills Kit SDK for Node.js (v2).
const Alexa = require('ask-sdk-core');

// Read in the APL documents for use in handlers
const helloworldDocument = require('./helloworldDocument.json');
const helloworldWithButtonDocument = require('./helloworldWithButtonDocument.json');

// Tokens used when sending the APL directives
const HELLO_WORLD_TOKEN = 'helloworldToken';
const HELLO_WORLD_WITH_BUTTON_TOKEN = 'helloworldWithButtonToken';

const StartOverIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.StartOverIntent';
    },
    handle(handlerInput) {
        let speakOutput = '';
        let responseBuilder = handlerInput.responseBuilder;
        if (Alexa.getSupportedInterfaces(handlerInput.requestEnvelope)['Alexa.Presentation.APL']) {
            // Get the APL visual context information and determine whether the
            // device is displaying the document with the token HELLO_WORLD_WITH_BUTTON_TOKEN
            const contextApl = handlerInput.requestEnvelope.context['Alexa.Presentation.APL'];
            if (contextApl && contextApl.token === HELLO_WORLD_WITH_BUTTON_TOKEN) {
                // Build an ExecuteCommands directive to change the opacity of the Text component
                // back to 1 so that it displays again. Note that the token included in the
                // directive MUST match the token of the document that is displaying.
                speakOutput = "OK, I'm going to try to bring that text back into view.";

                // Create an APL command that gradually changes the opacity of the
                // component back to 1 over 3 seconds
                const animateItemCommand = {
                    type: "AnimateItem",
                    componentId: "helloTextComponent",
                    duration: 3000,
                    value: [
                        {
                            property: "opacity",
                            to: 1
                        }
                    ]
                };

                // Add the command to an ExecuteCommands directive and add this to
                // the response
                responseBuilder.addDirective({
                    type: 'Alexa.Presentation.APL.ExecuteCommands',
                    token: HELLO_WORLD_WITH_BUTTON_TOKEN,
                    commands: [animateItemCommand]
                });
            } else {
                // Device is NOT displaying the expected document, so provide
                // relevant output speech.
                speakOutput = "Hmm, there isn't anything for me to reset. Try invoking the 'hello world with button intent', then click the button and see what happens!";
            }
        } else {
            speakOutput = "Hello, this sample is more interesting when used on a device with a screen. Try it on an Echo Show, Echo Spot or a Fire TV device.";
        }
        return responseBuilder
            .speak(speakOutput)
            .getResponse();
    }
};
This code example uses the Alexa Skills Kit SDK for Python.
import json

from ask_sdk_core.utils import (
    is_request_type, is_intent_name, get_supported_interfaces)
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_model import Response
from ask_sdk_model.interfaces.alexa.presentation.apl import (
    RenderDocumentDirective, AnimatedOpacityProperty, AnimateItemCommand,
    ExecuteCommandsDirective, UserEvent)

from typing import Dict, Any

# APL Document file paths for use in handlers
hello_world_doc_path = "helloworldDocument.json"
hello_world_button_doc_path = "helloworldWithButtonDocument.json"

# Tokens used when sending the APL directives
HELLO_WORLD_TOKEN = "helloworldToken"
HELLO_WORLD_WITH_BUTTON_TOKEN = "helloworldWithButtonToken"


def _load_apl_document(file_path):
    # type: (str) -> Dict[str, Any]
    """Load the apl json document at the path into a dict object."""
    with open(file_path) as f:
        return json.load(f)


class StartOverIntentHandler(AbstractRequestHandler):
    """Handler for StartOverIntent."""

    def can_handle(self, handler_input):
        # type: (HandlerInput) -> bool
        return is_intent_name("AMAZON.StartOverIntent")(handler_input)

    def handle(self, handler_input):
        # type: (HandlerInput) -> Response
        speak_output = None
        response_builder = handler_input.response_builder

        if get_supported_interfaces(
                handler_input).alexa_presentation_apl is not None:
            # Get the APL visual context information from the JSON request
            # and check that the document identified by the
            # HELLO_WORLD_WITH_BUTTON_TOKEN token ("helloworldWithButtonToken")
            # is currently displayed.
            context_apl = handler_input.request_envelope.context.alexa_presentation_apl
            if (context_apl is not None and
                    context_apl.token == HELLO_WORLD_WITH_BUTTON_TOKEN):
                speak_output = ("OK, I'm going to try to bring that text "
                                "back into view.")
                animate_item_command = AnimateItemCommand(
                    component_id="helloTextComponent",
                    duration=3000,
                    value=[AnimatedOpacityProperty(to=1.0)]
                )
                response_builder.add_directive(
                    ExecuteCommandsDirective(
                        token=HELLO_WORLD_WITH_BUTTON_TOKEN,
                        commands=[animate_item_command]
                    )
                )
            else:
                # Device is NOT displaying the expected document, so provide
                # relevant output speech.
                speak_output = ("Hmm, there isn't anything for me to reset. "
                                "Try invoking the 'hello world with button "
                                "intent', then click the button and see what "
                                "happens!")
        else:
            # User's device does not support APL, so tailor the speech to
            # this situation
            speak_output = ("Hello, this example would be more interesting on "
                            "a device with a screen. Try it on an Echo Show, "
                            "Echo Spot or a Fire TV device.")

        return response_builder.speak(speak_output).response
This code example uses the Alexa Skills Kit SDK for Java.
package handlers;

import static com.amazon.ask.request.Predicates.intentName;
import static util.Constants.HELLO_WORLD_WITH_BUTTON_TOKEN; // String "helloworldWithButtonToken"

import java.util.Optional;

import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.dispatcher.request.handler.RequestHandler;
import com.amazon.ask.model.Response;
import com.amazon.ask.model.interfaces.alexa.presentation.apl.AnimateItemCommand;
import com.amazon.ask.model.interfaces.alexa.presentation.apl.AnimatedOpacityProperty;
import com.amazon.ask.model.interfaces.alexa.presentation.apl.ExecuteCommandsDirective;
import com.amazon.ask.request.RequestHelper;
import com.amazon.ask.response.ResponseBuilder;
import com.fasterxml.jackson.databind.JsonNode;

public class StartOverIntentHandler implements RequestHandler {

    @Override
    public boolean canHandle(HandlerInput input) {
        return input.matches(intentName("AMAZON.StartOverIntent"));
    }

    @Override
    public Optional<Response> handle(HandlerInput input) {
        String speechText;
        ResponseBuilder responseBuilder = input.getResponseBuilder();

        if (RequestHelper.forHandlerInput(input)
                .getSupportedInterfaces()
                .getAlexaPresentationAPL() != null) {
            // Get the APL visual context information from the JSON request
            // and check that the document identified by the
            // HELLO_WORLD_WITH_BUTTON_TOKEN token ("helloworldWithButtonToken")
            // is currently displayed.
            JsonNode context = input.getRequestEnvelopeJson().get("context");
            if (context.has("Alexa.Presentation.APL") &&
                    (context.get("Alexa.Presentation.APL")
                            .get("token")
                            .asText()
                            .equals(HELLO_WORLD_WITH_BUTTON_TOKEN))) {
                speechText = "OK, I'm going to try to bring that text back into view.";

                AnimatedOpacityProperty animatedOpacityProperty = AnimatedOpacityProperty.builder()
                        .withTo(1.0)
                        .build();

                AnimateItemCommand animateItemCommand = AnimateItemCommand.builder()
                        .withComponentId("helloTextComponent")
                        .withDuration(3000)
                        .addValueItem(animatedOpacityProperty)
                        .build();

                // Note that the token for ExecuteCommands must
                // match the token provided in the RenderDocument
                // directive that originally displayed the document
                ExecuteCommandsDirective executeCommandsDirective = ExecuteCommandsDirective.builder()
                        .withToken(HELLO_WORLD_WITH_BUTTON_TOKEN)
                        .addCommandsItem(animateItemCommand)
                        .build();

                responseBuilder.addDirective(executeCommandsDirective);
            } else {
                // Device is NOT displaying the expected document, so provide
                // relevant output speech.
                speechText = "Hmm, there isn't anything for me to reset. Try invoking the 'hello world with button intent', then click the button and see what happens!";
            }
        } else {
            speechText = "Hello, this sample is more interesting when used on a device with a screen. Try it on an Echo Show, Echo Spot or a Fire TV device.";
        }

        return responseBuilder
                .withSpeech(speechText)
                .build();
    }
}
Related topics
- APL Document
- Alexa.Presentation.APL interface
- Alexa Skills Kit SDKs
- AnimateItem Command
- SendEvent Command
- Skill Sample – Node.js First APL Skill
Last updated: Nov 28, 2023